Heron of Alexandria (first century AD) was the first chronicler of a peculiar mechanism capable of holding the flame of oil lamps steady. Later, similar mechanisms were found in water clocks. In the eighteenth century, they reappeared in the governor of Watt’s steam engine, which drove industrial production. Since 1910, engineers have called them servomechanisms. Norbert Wiener (1948, 1950), a mathematician at MIT, realized that the theory underlying them had far wider applications, linked them to communication, and called the study of these phenomena “cybernetics . . . the science of control and communication in the animal and the machine.” He derived “cybernetics” from the Greek kybernetes, or “steersman.” Before him, Ampère had used the word to designate a science of government, without, however, developing the idea.
Cybernetics became established during a series of interdisciplinary meetings held between 1944 and 1953. It brought together some of the most important postwar intellectuals, including Wiener, John von Neumann, Warren McCulloch, Claude Shannon, Heinz von Foerster, Ross Ashby, Gregory Bateson, Margaret Mead, and Alex Bavelas, and became known as the Macy Conferences on Cybernetics (Heims 1991). Erving Goffman presented his early sociological ideas at these meetings as well. Cybernetics quickly expanded its conceptions to embrace neural networks (McCulloch), communication patterns in groups (Bavelas), anthropological concerns (Mead), mind (Ashby, Bateson), management (Stafford Beer), and political systems (Karl Deutsch).
Cybernetics did not establish itself as a discipline with its own academic institutions. It remained an interdiscipline that gave birth to numerous specializations; for example, mathematical communication theory, control theory, automata theory, neural networks, computer science, artificial intelligence, game theory, information systems, family systems theory, and constructivism. Many of its revolutionary ideas – feedback, digitization, simulation, autonomy, networks, homeostasis, and complexity – are now embraced by established disciplines. In the 1970s, cyberneticians shifted gears, recognizing that the known reality cannot be separated from the process by which it is explored and constructed. This gave rise to second-order cybernetics, fundamental to the social sciences.
The use of the prefix “cyber-” in popular literature about intelligent artifacts, digital media, and globalization of communication – fueling a sense of liberation from authorities, geography, and technological determinisms – is evidence for how well the fruits of cybernetics are growing in contemporary culture, if in a somewhat shallow soil.
Initially, cybernetics explored models of phenomena, such as those mentioned above, without reference to their materiality, ultimately favoring impersonal mathematical formulations. The power of computer programs, for example, does not depend on who uses them or on which machine they run. Their virtue lies in their reproducibility. The a-materiality of early cybernetic concepts fertilized many fields.
Ashby (1956) characterized cybernetics as the science of all possible systems, a science that remains informative even when some of these systems cannot be built or have not evolved in nature. This definition departs from the traditional preoccupation of science with generalizations from observational data. Bateson (1972) related this characterization to the theory of evolution.
Second-order cybernetics, or the cybernetics of cybernetics, the control of control, and the communication of communication (von Foerster 1996), by contrast, acknowledges that all explanations reside in language, the language of a community – cyberneticians, for example. It amounts to a shift from the earlier preoccupation with disembodied models to constructions that recognize the observer as a participant in them, embodied knowing.
Cybernetic explanations can be described as resting on four conceptual pillars: circularity, information, process, and participation. These are not separate ideas. They join in the cybernetic project of overcoming numerous orthodoxies of contemporary science.
Just as arguments proceed from premises to conclusions, Newtonian science proceeds from causes to effects. Cyberneticians, by contrast, are fond of circular causalities such as when A affects B, B affects C, and C affects A in return. They include linear causalities as less interesting cases. In systems that embody such circularities, every part affects itself via others, causes and consequences are no longer clearly distinguishable, and a dynamics is set in motion that resists outside interventions.
Circular causality is familiar from home heat control. The thermostat does not respond to room temperature as such, but to the difference between actual and desirable temperatures. This difference determines another difference, whether a heater is on or off. When it is off, the room slowly loses heat. When it is on, the heater increases the room temperature up to the point where the difference has disappeared and room temperature stays where it is set. Feedback loops reside in more complex technologies (target-seeking missiles, automated production), biology (regulation of bodily functions), economics (supply and demand of goods and services), and in social life (the giving and taking required to maintain friendships). In human communication, feedback is explicit in verbal comments on what one said, and implicit in responses to differences between observed and expected consequences of one’s actions. Feedback is a necessary condition for learning and for maintaining homeostasis, whether within organisms, families, or social organizations.
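The thermostat loop described above can be sketched in a few lines of code. This is a minimal illustration of negative feedback; the set point, heating rate, and heat-loss rate are invented numbers, not properties of any real device.

```python
# Minimal sketch of a negative feedback loop: the thermostat acts on the
# difference between actual and desired temperature, not on temperature
# itself. All rates and the set point below are illustrative.

def simulate_thermostat(set_point=20.0, temp=15.0, steps=50):
    history = []
    for _ in range(steps):
        heater_on = temp < set_point   # one difference determines another
        if heater_on:
            temp += 0.8                # heater warms the room
        temp -= 0.2                    # the room always loses some heat
        history.append(temp)
    return history

temps = simulate_thermostat()
# The trajectory converges on the set point instead of drifting away.
```

Reversing the sign of the correction (heating when the room is already too warm) would turn the same loop into positive feedback: deviations would be amplified and the temperature would diverge instead of settling.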
Systems with feedback loops either converge on a stable state, or diverge, leading to meltdowns, fights, and breakups. The former is called negative feedback, as deviations are reduced. The latter is called positive feedback, as deviations become amplified. While most technical devices are designed to converge on a preferred state, social interactions often escalate, unintentionally as in arms races, or intentionally, to achieve various kinds of growths, for example in business or individuals in therapy. Circular causalities underlie all purposive systems and explain their capacity to compensate for variations in their environment, control themselves, and display a “will of their own.”
Tools usually are engineered as open functions, a means for humans to control something else – driving a car or influencing an audience. To understand systems with loops, including feedback, their output must re-enter their input, at least in part. The functions that enable such self-application are called recursive. Examples of self-application are copying a document repeatedly, which eventually results in a version that resists what the copier can do to it, or taking the square root of any positive number over and over again, which converges to 1. Fractal geometry utilizes recursive functions by repeatedly inserting a figure into its own parts, generating a complex configuration. Individuals converge to their identity by recursively using their name, developing behaviors that are uniquely traceable to them, and interacting with others consistently.
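The repeated square root mentioned above is easy to verify in code: feeding a function its own output, over and over, drives wildly different starting values to the same fixed point.

```python
import math

# Self-application: the function's output re-enters as its input.
def iterate_sqrt(x, times=50):
    for _ in range(times):
        x = math.sqrt(x)
    return x

# Very large and very small starting values converge on the same
# attractor, the fixed point 1.
print(iterate_sqrt(1e6))    # ≈ 1.0
print(iterate_sqrt(0.001))  # ≈ 1.0
```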
Self-application ushered in the computer revolution. Early ideas of computation maintained a clear distinction between the data and the operations transforming them. It occurred to von Neumann to store programs in the same memory as the data to which they were applied. This opened the possibility of programming computers so that programs could call on other programs, including on themselves. He also theorized a special case of self-application, the ability of machines to reproduce themselves. While his work did not create self-reproducing robots, his ideas resurfaced in communication theory (reproducibility), and recently informed Richard Dawkins’s memetics.
A self-organizing system is one that develops in its own terms, without instruction from, copying of, or adapting to features of an external environment. The biological makeup of living organisms, for example, cannot be explained from where they live. Their history is their best explanation. While social organizations may well learn from each other, they also develop indigenous forms. Ashby’s law of self-organization specifies two properties. First, to the extent the parts of a system engage in recurrent interactions, they converge on stable regions – or attractors – in the process of which their networks of communication become increasingly orderly. Second, as a system moves from one attractor to another, its freedom to move is transformed into indigenously orderly conduct. Von Foerster demonstrated that systems are self-organizing by decreasing their entropy (increasing order) inside, or increasing entropy (decreasing order) outside, or both. He also showed that random perturbations can speed up self-organization, and called this phenomenon the “order from noise” principle.
Fundamental to all self-organizing systems is that they defy control from their environment as well as understanding by external observers. This is due to their constitution as relatively closed circular networks of communication. Conversations, for example, are self-organizing, and it is hard indeed to know what is going on in one without joining. This difficulty is magnified when seeking to understand social organizations, which cyberneticians describe as networks of conversations. Self-organization presents profound methodological challenges to the study of communication.
Autopoiesis or self-production is a process that Humberto Maturana and Francisco Varela (1987) identified as the condition for living systems to practice their living. Autopoietic systems consist of recursive networks of interactions among components that produce all components necessary for such a network to operate, maintain a boundary within which their autonomy is practiced, and thus continually regenerate themselves under conditions of continuous perturbations from their outside. Reproduction, cognition, purposiveness, the efficient use of energy, and even survival are epiphenomena of the autopoietic organization of a living system. Autopoiesis explains, for example, that humans continuously change their organic makeup without losing their identity as human beings. Allopoietic systems, by contrast, produce something other than their own components. A bakery, for example, may feed its owner metaphorically but does not produce what it takes to bake. Autopoietic systems can distinguish among perturbations and compensate for them by changing their structure, but have no access to the nature or sources of these perturbations.
For Maturana, autopoiesis is embodied in the components that constitute the self-generating networks of production. It cannot be separated from where it is embodied. Therefore, he distanced himself from sociological theorists like Niklas Luhmann, who appropriated the term in his disembodied conception of social systems.
Cybernetic conceptions of information emerged in studying purposive systems, which must respond appropriately to deviations from their goals. Bateson observed that circular systems, from simple feedback loops, to manifestations of mind and social systems, do not respond to physical stimuli but to differences between them. They are triggered by differences, process differences and differences of differences, and may in turn affect the differences that triggered the process. He defined information as the differences that make a difference at a later point. This places information on a logical level higher than the physical phenomena it relates, and therefore it cannot be explained or measured in physical terms. To highlight that information is created within a system, not representative of realities external to it, Varela proposed to write the words as “in-formation.”
Shannon introduced binary digits (bits) as the unit for measuring the extent to which messages from one point are reproduced at another point. His information calculus gave rise to several concepts, such as noise, the extent to which entropy surreptitiously enters a communication channel and inexplicably pollutes messages; equivocation, the extent to which messages lose their details in the course of transmission; and redundancy, unused or wasted channel capacity. His tenth theorem asserts that the effect of noise can be eliminated by adding an amount of redundancy to the channel that is equal to or larger than the measure of noise. Redundancy has made contemporary information technology extraordinarily reliable. Language is said to be about 70 percent redundant, allowing people to understand verbal expressions even when barely audible, mispronounced, or containing spelling errors.
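Shannon’s point that redundancy counteracts noise can be illustrated with the crudest of error-correcting schemes, a threefold repetition code. This toy sketch is an invented illustration, far less efficient than the codes Shannon’s theorem actually concerns.

```python
# Toy illustration of redundancy defeating noise: send each bit three
# times; the receiver takes a majority vote over every group of three.

def encode(bits):
    return [b for b in bits for _ in range(3)]   # triple every bit

def decode(received):
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]  # majority vote

message = [1, 0, 1, 1]
sent = encode(message)
sent[1] ^= 1   # noise flips one copy of the first bit
sent[9] ^= 1   # ...and one copy of the last bit
assert decode(sent) == message   # the redundancy absorbs both errors
```

The price is that two thirds of the channel carries no new information; Shannon’s theorem is about achieving the same reliability with far less redundancy.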
Ashby explored the complement of communication: regulation. Purposive systems, he noted, must counteract the entropy in their environment, for example by reducing the effects of external temperature fluctuations on the room temperature, and accomplishing this requires internal variability. To remain competitive, business organizations need to respond to environmental changes and must have uncommitted organizational variety at their disposal. Ashby’s law of requisite variety states that effective regulators or, more generally, purposive systems, must possess an internal variety equal to or larger than the variety of the disturbances of that system; in short: “only variety can destroy variety” (Ashby 1956, 207). Ashby’s law includes Shannon’s tenth theorem as a special case; they merely pursue opposite aims of information processing.
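The law of requisite variety admits a toy demonstration. In the sketch below, the outcome is modeled as (disturbance + response) mod 3 and the regulator’s goal is to hold the outcome at 0; this arithmetic model is an invented illustration, not Ashby’s own formalization.

```python
# Toy model: outcome = (disturbance + response) mod 3; goal: outcome == 0.
# Regulation succeeds only if every possible disturbance can be met by
# some counteracting response.

def can_regulate(disturbances, responses, goal=0):
    return all(any((d + r) % 3 == goal for r in responses)
               for d in disturbances)

disturbances = [0, 1, 2]                       # three kinds of disturbance
print(can_regulate(disturbances, [0, 1, 2]))   # True: matching variety
print(can_regulate(disturbances, [0, 1]))      # False: too little variety
```

With only two responses against three disturbances, some disturbance necessarily goes uncompensated: the regulator’s variety falls short of the variety it must destroy.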
Philosophically, cyberneticians’ preference for processes favors ontogenesis over ontology, accounting for processes of evolution rather than its products, constructing reality rather than describing what exists. An example from early cybernetics is the development of algorithms. Adding time to propositional logic, which is limited to stating facts, creates an algorithmic language, capable of describing processes and essential to programming computers. Shannon’s celebrated MIT thesis enabled networks of relays – vacuum tubes in the case of ENIAC, the first operational electronic computer – to be treated algebraically, and hence designed and simplified far more efficiently. He concluded before von Neumann did that anything that can be stated logically can be converted into an algorithm and hence becomes computable.
Adding time also led Varela to his calculus of self-reference. Self-references had threatened the consistency of logical systems for centuries. Whitehead and Russell identified them as the chief culprit of problems in logic, considered them an abnormality of language, and proposed a “theory of logical types” to rule them out of the language of science. Yet Zen Buddhism has employed paradoxes as powerful teaching devices for centuries, and psychotherapy has recently learned to use them in the treatment of psychopathologies. Human communication scholarship, especially by Bateson (1972) and the Palo Alto School that elaborated his ideas, developed its own responses to Russell’s problem, for example, in the concept of meta-communication and the distinction between content and relational communication (Ruesch & Bateson 1951; Watzlawick et al. 1967). Utilizing Spencer Brown’s laws of form, Varela dissolved the “viciousness” of such paradoxes by describing their eigen-dynamics – from assuming Epimenides is telling the truth, it follows he is lying, it follows he is telling the truth, it follows he is lying, ad infinitum. Dissolving paradoxes in time renders them analyzable, no longer threatening. Varela characterized self-reference as “the infinite in finite guise.” Shifting from the study of writings to processes of communication did much the same for communication research.
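The eigen-dynamics Varela describes can be stepped through mechanically. The sketch below is a trivial illustration of the oscillation, nothing more: once the paradox unfolds in time, each truth value simply determines the next.

```python
# "Epimenides is lying": assuming the statement true makes it false,
# and vice versa. In time, the paradox becomes a harmless oscillation.

def liar_step(assumed_true):
    return not assumed_true

values = []
v = True                 # start by assuming Epimenides tells the truth
for _ in range(6):
    v = liar_step(v)
    values.append(v)
print(values)            # [False, True, False, True, False, True]
```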
Explaining systems in terms of what they do rather than what they are is also reflected in efforts at verbing nouns, like speaking of observing instead of observers, languaging instead of language, governing instead of government, or processes of living instead of life – but verbing may not be enough.
Contemporary computers are serial machines – characterized by changing from one state to the next. Social phenomena, by contrast, involve massively parallel processes: different things happen simultaneously, raising doubts about the ability of serial models to help understanding of social processes. The parallel computer is a promising alternative. Heralded as a recent idea, it goes back to early cybernetic conceptions, for example in McCulloch’s neural nets, Frank Rosenblatt’s perceptron, and von Foerster’s pattern recognizer.
Digital computers also are nonlinear machines. Many social theories, especially of the effects of mass communication, are still conceptualized in terms of linear functions – relating changes in one variable to changes in another. Although digital computers can be tamed to compute linear functions, this is not their strength. Recently, social scientists have discovered that nonlinear models can offer powerful simulations of social phenomena.
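One standard nonlinear model often used in such simulations – chosen here purely as an illustration, since the text names no specific model – is the logistic map, whose behavior shifts from stable to chaotic with a single parameter, something no linear function relating one variable to another can do.

```python
# Logistic map: x' = r * x * (1 - x). Nonlinear: doubling the input does
# not double the output, and behavior depends sharply on the parameter r.

def logistic_trajectory(r, x=0.2, steps=100):
    xs = []
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

stable = logistic_trajectory(2.5)    # converges to the fixed point 0.6
chaotic = logistic_trajectory(3.9)   # wanders without ever settling
```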
The cybernetic preoccupation with processes and ontogenesis also prepared the ground for its constructivism, an effort to understand reality not as given but as brought forth in the process of enacting one’s constructions of reality. Invoking actions that create realities implies that realities could have been otherwise, liberates scientific explorations from the determinism of causal explanations, and opens the possibility of changing what is undesirable.
Participation or Second-Order Cybernetics
Werner Heisenberg’s uncertainty principle asserts a limit to observing a system’s states whenever the act of observation affects them. This principle troubled the idea of knowing by detached observation, celebrated since the Enlightenment, fundamental to the natural sciences, and emulated by social scientists as well. The shift from first-order to second-order cybernetics takes note of Heisenberg’s epistemological challenge.
This shift was gradual. Early cyberneticians, while building models, already reflected on their own role. Shannon theorized his ability to design error-correcting codes that would resist noise in communication channels. Ashby explored the difference between what he, as the designer of a system, knew about its internal working and what he could learn about that system by observing the consequences of experimenting with it – describing that system without knowledge of its history or makeup, as a black box. The idea of a black box became the paradigm for studying systems that have no designer or cannot be taken apart, such as a living brain or an economy. In modeling such systems, cyberneticians soon became aware that their models defined, if not created, the conceptions of the modeled. This rendered the human modeler a key to understanding what was to be known. Following up on such threads, the anthropologist Mead (1968) suggested that cyberneticians apply cybernetic principles to their own work, a suggestion that von Foerster (1996) formulated as the inclusion of the observer in the observed and called “second-order cybernetics.” However, Mead went further, asking cyberneticians to explore systematically how societies organize themselves around cybernetic ideas and to assume responsibility for what the language of cybernetics brings forth – in effect, asking cyberneticians to acknowledge their role as unwitting change agents or designers. This prompted Klaus Krippendorff to define second-order cybernetics as the cybernetics of participating in systems under continuous reconstruction by their constituents, invoking a participatory epistemology.
Second-order cybernetics highlights the use of language – language not as a system of representations of an objective reality, which Richard Rorty (1979) aptly criticized, but as a Wittgensteinian language game or as Maturana’s coordination of coordination of action. Coordination is already evident, for example, when people kick a ball to each other. However, it is in language that soccer is distinguished from other sports and its rules are invoked to determine the progress of the game. In this conception, language brings forth consensual coordinations, in the process of which people acknowledge each other and direct their constructive attention to phenomena conceptualized in language. Conversation is one such language game. Second-order cybernetics takes conversation as the site where realities are co-constructed.
In the social domain, unless the products of communication research are purely academic or concern necessary causes, the very act of publishing social theories and making them available to interested parties, especially to those theorized therein, encourages people to respond. Whether people take advantage of such theories, and if applicable to themselves, conform to their claims, or act to counter them, any of these responses invariably changes the theorized reality right in front of its theorist’s eyes. The communication of social theories can affect what they theorize, change their own validity, and create new realities. This renders social theorists constitutive participants in the very realities they are describing. (Without realizing his own culpability in that constructive circularity, Anthony Giddens referred to this process as a double hermeneutic.)
We know many constructive circularities of this kind. One example is self-fulfilling prophecies (or hypotheses), dreaded by sociologists like Robert Merton, who prefer that the truth-value of their theories be objectively verifiable, not changing in the course of their dissemination. Another well-known example is Elisabeth Noelle-Neumann’s spiral of silence, which takes effect when citizens are informed of where they stand relative to a political majority. Economic crises, the growth of social movements, and many psychological problems can be explained in linguistic terms, as consensual coordination of understanding and actions. Herbert Simon realized the effects of publishing polling results on voting behavior, and suggested that valid election predictions must build these effects into whatever is published.
Mead realized that upholding the illusion of detached observers is no longer an option for cyberneticians when cybernetic principles begin to transform society. To the extent that scientific findings are socially relevant, scientists need to see themselves as participants in the very systems they claim to describe but unwittingly influence if not create. The “god’s eye view” that scientists have enjoyed since the Renaissance reveals itself as a convenient way to avoid responsibility for the inevitable circularity of scientific involvement. Second-order cybernetics can be likened to an insider’s, not a meta-, perspective.
In the domain of cognition, McCulloch’s work on neuronal networks, Maturana’s experiments on the circularity of perception and action, and his and Varela’s biology of cognition are embraced by what Ernst von Glasersfeld (1995) calls radical constructivism (Watzlawick 1984). It acknowledges that humans cannot leave their own nervous system and enter an external world to determine what they are seeing. Our nervous system does not map the Cartesian “universe” but constructs realities suitable for living and interacting with others. In enacting them, constructions of reality may be retained when they lead to what one expects, or revised and expanded when they prove improvable – lessons to learn from. When they prove fatal, however, the constructions and their beholders vanish. Thus, constructions of reality are subject to evolution. They are what can bring down the autopoiesis of living systems.
Second-order cybernetic epistemology accepts Maturana’s injunction against theorizing what violates human bodily capabilities. Its application to social phenomena leads to the study of embodied social phenomena, grounded in practices of living and communication, each enacting their own understanding, responding to their understanding of others’ understanding, and generally expanding the possibilities of living together.
As an interdiscipline, cybernetics continues to create challenging conceptions and takes responsibility for the realities that follow from them.
- Ashby, W. R. (1956). An introduction to cybernetics. London: Chapman and Hall.
- Bateson, G. (1972). Steps to an ecology of mind. New York: Ballantine.
- Heims, S. J. (1991). The cybernetics group. Cambridge, MA: MIT Press.
- Maturana, H. R., & Varela, F. J. (1987). The tree of knowledge: Biological roots of human understanding. Boston, MA: Shambhala.
- Mead, M. (1968). The cybernetics of cybernetics. In H. von Foerster, J. D. White, L. J. Peterson, & J. K. Russell (eds.), Purposive systems: First annual symposium of the American Society for Cybernetics. New York: Spartan, pp. 1–11.
- Rorty, R. (1979). Philosophy and the mirror of nature. Princeton, NJ: Princeton University Press.
- Ruesch, J., & Bateson, G. (1951). Communication, the social matrix of psychiatry. New York: W. W. Norton.
- von Foerster, H. (ed.) (1996). Cybernetics of cybernetics, 2nd edn. Minneapolis, MN: Future Systems. (Original work published 1974).
- von Glasersfeld, E. (1995). Radical constructivism. Washington, DC: Falmer Press.
- Watzlawick, P. (1984). The invented reality. New York: W. W. Norton.
- Watzlawick, P., Beavin, J. H., & Jackson, D. D. (1967). Pragmatics of human communication: A study of interactional patterns, pathologies, and paradoxes. New York: W. W. Norton.
- Wiener, N. (1948). Cybernetics, or control and communication in the animal and machine. New York: John Wiley.
- Wiener, N. (1950). The human use of human beings: Cybernetics and society. Boston, MA: Houghton Mifflin.