Every computational device must allow for some form of interaction with its user. Human–computer interaction is the discipline that studies how people interact with computational devices and the implications that the design of the human–machine interface has for this interaction. The discipline also investigates wider communal and social implications associated with user interface design and the use of digital systems and artifacts.
Technological Steps
The earliest user interface to a practical computer was the punched card system devised by Joseph Marie Jacquard in 1804. This system (remarkably like the punched cards used for data entry well into the 1970s) enabled weavers to instruct a loom in the design of a textile. The first modern computers emerged in the mid-1940s. Shortly thereafter, human–computer interaction leapt forward when Grace Hopper, an employee at the Remington Rand Corporation, invented the compiler in the early 1950s. The compiler allowed programmers to write in languages closer to English instead of in machine code, the binary instructions a computer executes directly. A series of new programming languages followed. Many languages still in use today, such as Fortran and C, date from the late 1950s through the early 1970s.
A number of breakthroughs in the 1970s began to open computing to nonprogrammers. Arguably the most revolutionary development was the invention in 1979 of VisiCalc, a spreadsheet program that moved accounting from paper ledgers to the computer. This program was embraced by ordinary people, who found countless uses for it, from running small businesses to planning the family vacation budget to forecasting corporate expenses. VisiCalc's power lay in its user interface, which enabled people to view rows and columns of numbers just as they would on paper. Even today, spreadsheets are used by engineers, scientists, and businesses as a primary tool of calculation.
The other major development was a radical redesign of the computer to fit on a desktop. The Alto computer, invented at the Xerox Palo Alto Research Center in California, not only fit on a desk but also incorporated ideas such as the mouse (developed by Douglas Engelbart at SRI in Menlo Park, California, in 1964), WYSIWYG editors, and bit-mapped graphical displays to provide a graphical view of the user's data and applications.
Contributions Of The Human–Computer Interaction Community
Emergence Of The Community
In 1982, the Association for Computing Machinery's SIGCHI Conference on Human Factors in Computing Systems was held in Gaithersburg, Maryland. This conference marked the formal beginning of human–computer interaction as a distinctive intellectual community. Although the conference's name referenced "human factors," the concerns and methods of the human–computer interaction (HCI) community are only loosely connected to traditional human factors research, which is rooted in ergonomics and extends well beyond computing.
The HCI community is a coalition of computer scientists, psychologists, sociologists, anthropologists, and artists, who share an interest in the design of technology as well as in analysis of its effects on people. Computer science and psychology are most heavily represented, but the community is open and interdisciplinary. It welcomes research from academia and industry as well as practitioners who confront issues of human–computer interaction in practical ways. The community is international, with many active members in North America and Europe, as well as in Japan, Latin America, and elsewhere. The HCI community, as it has developed since Gaithersburg, is best exemplified by scholars and practitioners who gather at conferences such as ACM SIGCHI (known as CHI), NordiCHI, Interact, and HCI International.
The proceedings of these conferences are a rich source of information on human–computer interaction. The CHI conference, for example, functions much as a selective journal (with a low acceptance rate) and showcases some of the most interesting work in human–computer interaction. The contributions of the human–computer interaction community can be roughly categorized in three areas: the design of new digital technologies, the evaluation of the usability of technologies, and cognitive and social analyses of the use of technology.
The Design Of Technologies
In 1998, Brad Myers at Carnegie Mellon University published a thoughtful review, "A brief history of human–computer interaction technology," which surveyed new technologies that have come from the HCI community (Myers 1998). Myers described the provenance of familiar interaction technologies such as overlapping windows, icons, drawing programs, text editors, hypertext, and speech recognition. He noted that tools for building user interfaces, such as user interface management systems and widget toolkits, are part of the legacy of the HCI community. Component architectures for user interfaces, predecessors of Web 2.0 techniques such as "mashups" (which combine data or procedures from more than one application), were pioneered at Carnegie Mellon in 1983 in the Andrew system. Myers observed that most important innovations were funded by government, primarily the US government, although industry also played a role. Practically all the inventions Myers analyzed were developed at universities, industrial research laboratories, or think tanks such as SRI and the Rand Corporation.
Since Myers's review, one of the biggest successes from the HCI community has been recommender systems such as those used by Amazon.com to help prospective buyers evaluate potential purchases. Buyers are shown what customers with similar preferences have purchased, based on purchase data clustered according to user characteristics. The earliest version of a recommender system was part of a project at Xerox PARC in the 1980s.
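To make the underlying idea concrete, the following is a minimal sketch of user-based collaborative filtering, the general technique on which such recommender systems rest. The data, item names, and similarity measure are invented for illustration and do not describe Amazon's actual implementation.

```python
# A minimal sketch of user-based collaborative filtering.
# The purchase data and item names are invented for illustration.

# Each user's purchase history, represented as a set of item identifiers.
purchases = {
    "alice": {"novel", "teapot", "headphones"},
    "bob": {"novel", "headphones", "keyboard"},
    "carol": {"cookbook", "teapot", "apron"},
}


def similarity(a, b):
    """Jaccard similarity between two purchase sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0


def recommend(user, k=2):
    """Suggest items bought by the k most similar other users but not yet by `user`."""
    mine = purchases[user]
    neighbours = sorted(
        (u for u in purchases if u != user),
        key=lambda u: similarity(mine, purchases[u]),
        reverse=True,
    )[:k]
    suggestions = set()
    for neighbour in neighbours:
        suggestions |= purchases[neighbour] - mine
    return suggestions


print(recommend("alice"))  # e.g. {'keyboard', 'cookbook', 'apron'}
```

Production systems operate over millions of users and items and use more sophisticated similarity and clustering methods, but the principle of matching a user to other users with similar histories is the same.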
Usability
The second major contribution of the HCI community is an activist one. Human–computer interaction researchers have insisted that computer users have the right to well-designed applications that are easy to use. Don Norman has been especially vocal in defending the needs of users. The delightful cover of Norman's book, The design of everyday things (1990), depicts a teapot with the spout and handle mounted on the same side. This image has become an icon for the need for sensible design.
Usability is among the most important values of the HCI community. Several techniques for evaluating usability have been developed and are widely deployed in academic research and in industry. Formal, controlled laboratory studies are conducted in academic settings and at usability labs in companies such as Microsoft and Oracle. Other methods, such as heuristic evaluation and cognitive walkthroughs, provide less precise but more cost-effective results and have been shown to improve usability (Dix et al. 1993). Usability is not merely a good idea but a thriving practice that improves the products we all use. Of course, there is room for much improvement, and Norman has described some of the organizational barriers to usable design in industry (Norman 1998).
Analyses For Human–Computer Interaction
The third contribution of the human–computer interaction community has been cognitive and social analyses of human–computer interaction. The most robust, integrated body of work is in computer-mediated communication, which examines how people communicate in digital media including email, video, instant messaging, blogs, wikis, websites, listservs, and social networking websites such as Facebook and MySpace.
For example, several studies conducted in different countries have found that, in non-work contexts, users of cell phones, instant messaging, and social networking websites communicate primarily with small groups of well-known friends and family. These technologies do not so much expand social life as deepen it with those already known. Studies have demonstrated that, rather than isolating people, participation in online communities provides important social functions such as support for people (and their families) with unusual medical conditions, or casual socializing in online gaming communities (a substitute for the corner bar, which no longer exists in many North American cities and suburbs). A large body of cognitive research investigates the ways people interact with specific aspects of the user interface, such as menus and scroll bars, or how speech interfaces influence the user experience.
Theories Of The Field
The most theoretical work in human–computer interaction examines issues of cognition and sociality. The cognitive analyses of Card et al. (1983) laid the foundation for usability evaluation. More recently, as the purview of human–computer interaction has come to include the wider social context in which people interact with digital technologies, new theories have entered the community, in particular distributed cognition, activity theory, and actor–network theory. Distributed cognition and actor–network theory posit flat networks in which both people and technologies are "nodes" or "actants."
These theories are concerned with informational flows and the propagation of changing state across networks. Activity theory comes from a psychological tradition based on Vygotsky's (1986) work and views relations between people and technology more asymmetrically, with technology mediating reality for people. Human abilities differ from those of things; thus activity theory is more open to discussions of creativity, imagination, and invention, which are distinctively human (Stahl 2006). Actor–network theory and distributed cognition, on the other hand, have drawn attention to the agency of things. Kaptelinin and Nardi (2006) discuss these theories in the context of interaction design, proposing an extension to activity theory based on notions of agency derived from actor–network theory.
Human–computer interaction is a young field that can claim considerable success. It has advanced our understanding of relations between people and digital technologies and provided many of the technologies that we encounter on a daily basis. A void in this tradition is critical analysis of digital technology. The HCI community has taken a positive attitude toward technology for a variety of reasons: the connection to industry is one obvious reason, and the love of technology is another. This bias has meant that the scholars who are perhaps in the best position to provide critical analysis, because they have both deep technical understanding and an orientation to the human use of technology, have not examined the costs as well as the benefits of technology. The prevailing attitude is that one designs technology and the "marketplace" will correctly choose those technologies that meet social needs.
Critical Analyses
Critical analyses are therefore a needed counterpoint to the unexamined enthusiasm for ideas such as ubiquitous computing embraced by the human–computer interaction community. Certainly, we must ask whether every human experience should be mediated by digital technology. Another pertinent issue concerns the hidden trade-offs between usability (ease of use) and human choice. There is little doubt that smooth user interfaces facilitate the use of computers and are inviting, in the sense of enticing people to explore the possibilities computational devices offer. However, technologies influence human behavior not only through what can be done with them but also through what their functionality does not allow (Kallinikos 2002, 2004). Simplifying the user interface often means that some options are eliminated. Human–computer interaction techniques may thus unwittingly contribute to a narrowing of the space of possible options. The history of skills over the course of industrial capitalism suggests that removing complexity from the interface and stacking it "underneath" is a double-edged gesture (Mumford 1952). Very little is currently known about these trade-offs.
Apart from the social ramifications, there are externalities such as the toxic waste produced by digital devices, which has been shown to pollute groundwater as billions of devices are eventually dumped in landfills. Promising work that addresses environmental concerns proposes a practice of "sustainable interaction design" (Blevis 2007). In the future, considerations of social responsibility in human–computer interaction should extend from current concerns with providing appropriate technology for people with disabilities and people on low incomes to the more problematic issues of restraining and reshaping technology.
References:
- Blevis, E. (2007). Sustainable interaction design: Invention and disposal, renewal and reuse. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York: ACM Press, pp. 503–512.
- Card, S., Moran, T., & Newell, A. (1983). The psychology of human–computer interaction. Hillsdale, NJ: Lawrence Erlbaum.
- Dix, A., Finlay, J., Abowd, G., & Beale, R. (1993). Human–computer interaction. New York: Prentice Hall.
- Kallinikos, J. (2002). Re-opening the black-box of technology: Artifacts and human agency. In L. Applegate, R. Galliers, & J. I. DeGross (eds.), Proceedings of the 23rd International Conference on Information Systems. Barcelona, Spain, pp. 287–294.
- Kallinikos, J. (2004). Farewell to constructivism: Technology and context-embedded action. In C. Avgerou, C. Ciborra, & F. Land (eds.), The social study of information and communication technology. Oxford: Oxford University Press, pp. 140–161.
- Kaptelinin, V., & Nardi, B. (2006). Acting with technology: Activity theory and interaction design. Cambridge MA: MIT Press.
- Mumford, L. (1952). Arts and technics. New York: Columbia University Press.
- Myers, B. (1998). A brief history of human–computer interaction technology. ACM Interactions, 5, 44–54.
- Norman, D. (1990). The design of everyday things. New York: Perseus Books.
- Norman, D. (1998). The invisible computer. Cambridge MA: MIT Press.
- Stahl, G. (2006). Group cognition. Cambridge MA: MIT Press.
- Vygotsky, L. (1986). Thought and language. Cambridge MA: MIT Press.