Most social sciences share a specific comparative research methodology. Its definition often refers to countries and cultures at the same time, because cultural differences between countries can be rather small (e.g., among the Scandinavian countries), whereas very different cultural or ethnic groups may live within one country (e.g., minorities in the United States). Comparative studies face problems at every level of research, from theory to the type of research question, operationalization, instruments, sampling, and the interpretation of results.
The major problem of comparative research, regardless of the discipline, is that all aspects of the analysis, from theory to datasets, may vary in definitions and/or categories. Because the objects of comparison usually belong to different systemic contexts, establishing equivalence and comparability is the central challenge of comparative research. This is often "operationalized" as functional equivalence: the research objects must function equivalently within their different system contexts. Neither equivalence nor its absence, "bias," can be presumed; both have to be analyzed and tested for at all levels of the research process.
Equivalence And Bias
Equivalence has to be analyzed and established on at least three levels: the construct, the item, and the method (van de Vijver & Tanzer 1997). Whenever a test on any of these levels shows negative results, a cultural bias must be suspected. Bias on these three levels can thus be described as the opposite of equivalence. Van de Vijver and Leung (1997) define bias as variance in certain variables or indicators that is caused by the measurement's failure to account for cultural specifics rather than by the construct itself. For example, a media content analysis could measure the amount of foreign affairs coverage in a single variable based on the length of newspaper articles. If, however, newspaper articles in country A are generally longer than those in country B, irrespective of topic, a sum or mean index would almost inevitably lead to the conclusion that foreign affairs coverage in country A exceeds that in country B. This outcome would be hardly surprising and would miss the research question, because the measured difference reflects the national average length of articles rather than each country's actual emphasis on foreign affairs. To avoid this cultural bias, the results must be standardized or weighted, for example by the mean article length.
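To make the weighting step concrete, here is a minimal sketch in Python. All figures, country labels, and column names are invented for illustration, and weighting by the mean article length is just one plausible correction, not a prescribed procedure.

```python
import pandas as pd

# Hypothetical article-level data: country, topic, and article length in words.
articles = pd.DataFrame({
    "country": ["A", "A", "A", "B", "B", "B"],
    "topic":   ["foreign", "domestic", "foreign", "foreign", "domestic", "domestic"],
    "length":  [900, 850, 950, 400, 420, 380],
})

# Biased measure: total length of foreign affairs articles per country.
raw = articles[articles["topic"] == "foreign"].groupby("country")["length"].sum()

# Weighted measure: divide by each country's overall mean article length,
# so the index counts "article equivalents" rather than raw column inches.
mean_length = articles.groupby("country")["length"].mean()
weighted = raw / mean_length

print(raw.to_dict())                 # {'A': 1850, 'B': 400} -- A looks dominant
print(weighted.round(2).to_dict())   # {'A': 2.06, 'B': 1.0} -- comparable units
```

The raw index rewards whichever press system prints longer articles in general; the weighted index removes that national baseline before the countries are compared.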
To find out whether construct equivalence can be assumed, the researcher will generally need external data and rather complex procedures of culture-specific construct validation. Ideally, this includes analysis of the external structure, i.e., the theoretical relationships to other constructs, as well as an examination of the latent or internal structure. The internal structure consists of the relationships between the construct's sub-dimensions; it can be tested using confirmatory factor analyses, multidimensional scaling, or item analyses. Equivalence can be assumed if the construct validation has been successful for every culture and if the internal and external structures are identical in every country. It must be noted, however, that it is hardly possible to prove construct equivalence beyond any doubt (Wirth & Kolb 2004).
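As a rough illustration of an internal-structure check, factor loadings can be estimated separately per culture and compared. The sketch below simulates item data and uses exploratory factor analysis from scikit-learn as a simple stand-in for the confirmatory procedures named above; every name and figure is hypothetical.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

def simulate_scale(loadings, n=500):
    """Simulate a multi-item scale driven by one latent construct."""
    latent = rng.normal(size=(n, 1))
    return latent @ loadings + rng.normal(scale=0.5, size=(n, loadings.shape[1]))

# Assumed identical loading pattern in both cultures (the equivalence case).
loadings = np.array([[0.8, 0.7, 0.9, 0.6]])
for culture in ("A", "B"):
    items = simulate_scale(loadings)
    fa = FactorAnalysis(n_components=1).fit(items)
    # Similar loading patterns (up to sign) support an equivalent internal
    # structure; clearly diverging patterns would suggest construct bias.
    print(culture, np.round(fa.components_[0], 2))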
Even when construct equivalence is given, bias can still occur at the item level. The wording of items in surveys, and of definitions and categories in content analyses, can cause bias due to culture-specific connotations. Item bias is mostly evoked by bad, in the sense of nonequivalent, translation or by culture-specific questions and categories (van de Vijver & Leung 1997). Compared to the complex procedures required for construct equivalence, testing for item bias is rather simple (once construct equivalence has been established): persons from different cultures who occupy the same position or rank on an imaginary construct scale must show the same response pattern on every item that measures the construct. Statistically, the correlations of the single items with the total (sum) score have to be identical in every culture, as test theory generally uses the total score to estimate an individual's position on the construct scale. In brief, equivalence at the item level is established whenever the same sub-dimensions or issues can be used to explain the same theoretical construct in every country (Wirth & Kolb 2004).
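A minimal sketch of such an item-bias check, assuming simulated responses to a hypothetical four-item scale, computes corrected item-total correlations per culture and compares them:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

def scale_responses(n=300):
    """Simulate a hypothetical 4-item scale driven by one latent trait."""
    trait = rng.normal(size=(n, 1))
    items = trait + rng.normal(scale=0.7, size=(n, 4))
    return pd.DataFrame(items, columns=["q1", "q2", "q3", "q4"])

for culture in ("A", "B"):
    items = scale_responses()
    total = items.sum(axis=1)
    # Corrected item-total correlation: each item vs. the sum of the others.
    itc = items.apply(lambda col: col.corr(total - col))
    # An item whose correlation deviates markedly in one culture is a
    # candidate for item bias (e.g., a nonequivalent translation).
    print(culture, itc.round(2).to_dict())
```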
When the instruments are ready for application, method equivalence comes to the fore. Method equivalence comprises sample equivalence, instrument equivalence, and administration equivalence; a violation of any of these produces a method bias. Sample equivalence refers to an equivalent selection of subjects or units of analysis. Instrument equivalence concerns whether people in every culture are equally willing to take part in the study and equally familiar with the instruments (Lauf & Peter 2001). Finally, bias at the administration level can occur due to culture-specific attitudes of the interviewers that might produce culture-specific answers. Another source of administration bias can be socio-demographic differences between the various national interviewer teams (van de Vijver & Tanzer 1997).
The Role Of Theory
Theory plays a major role in three dimensions of a comparative research strategy: theoretical diversity, theory-drivenness, and contextual factors (Wirth & Kolb 2004). Swanson (1992) distinguishes three principal strategies for dealing with international theoretical diversity. A common one is the avoidance strategy. Many international comparisons are made by teams that come from a single culture or nation, and their research interests are usually restricted by their own (scientific) socialization. Within this monocultural context, broad approaches cannot be applied and "intertheoretical" questions cannot be answered. This strategy includes atheoretical and unitheoretical (referring to one national theory) studies with or without contextualization (van de Vijver & Leung 2000; Wirth & Kolb 2004).
The pretheoretical strategy tries to avoid cultural and theoretical bias in another way: such studies are undertaken without a strict theoretical background until the results are to be interpreted. The advantage of this strategy lies in exploration, i.e., in developing new theories. However, following the strict principles of critical rationalism, the missing theoretical background means that no theoretically deduced hypotheses can be tested (Popper 1994). Most of the results remain on a descriptive level and never attain theoretical diversity. Moreover, the instruments of pretheoretical studies must be almost "holistic" in order to cover every theoretical construct conceivable for the interpretation. These studies are mostly contextualized and can thus become rather extensive (Swanson 1992).
Finally, when a research team develops a meta-theoretical orientation as a framework for the basic theories and research questions, the data can be analyzed against different theoretical backgrounds. This meta-theoretical strategy allows extensive use of all data and contextual factors, but it produces quite a variety of often very different results, which are not easily summarized in one report (Swanson 1992). Obviously, the higher the level of theoretical diversity, the greater the effort required to establish construct equivalence.
Research Questions
Van de Vijver and Leung (1996, 1997) distinguish two types of research questions: structure-oriented questions are mostly interested in the relationships between certain variables, whereas level-oriented questions focus on the magnitude of parameter values. If, for example, a knowledge gap study analyzes the relationship between the knowledge gained from television news by recipients with high versus low socio-economic status (SES) in the UK and the US, the question is structure oriented, because the focus is on a within-nation relationship between knowledge indices; the mean gain of knowledge is not taken into account. Structure-oriented data usually call for correlation or regression analyses. If the main interest of the study is a comparison of the mean knowledge gain of people with low SES in the UK and the US, the research question is level oriented, because the knowledge indices of the two nations are compared directly; in this case, one would most probably use analyses of variance. The risk of cultural bias is the same for both kinds of research questions.
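The distinction maps directly onto different statistical procedures. The following sketch uses simulated data; the SES and knowledge-gain figures are invented, and a real study would use empirical indices and a proper regression model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical SES scores and knowledge-gain scores for two countries.
ses_uk = rng.normal(size=200)
gain_uk = 0.5 * ses_uk + rng.normal(size=200)   # stronger knowledge gap
ses_us = rng.normal(size=200)
gain_us = 0.3 * ses_us + rng.normal(size=200)   # weaker knowledge gap

# Structure-oriented question: compare the SES-knowledge relationship.
r_uk = stats.pearsonr(ses_uk, gain_uk)[0]
r_us = stats.pearsonr(ses_us, gain_us)[0]
print(f"SES-knowledge correlation: UK={r_uk:.2f}, US={r_us:.2f}")

# Level-oriented question: compare the mean gain of low-SES respondents.
low_uk, low_us = gain_uk[ses_uk < 0], gain_us[ses_us < 0]
f_stat, p_value = stats.f_oneway(low_uk, low_us)
print(f"Mean gain, low SES: UK={low_uk.mean():.2f}, US={low_us.mean():.2f} "
      f"(F={f_stat:.2f}, p={p_value:.3f})")
```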
Emic And Etic Strategies Of Operationalization
Before operationalizing an international comparison, the research team has to analyze construct equivalence in order to establish comparability. If internal construct equivalence is missing, the construct cannot be measured equivalently in every country, and the decision whether or not to use the same instruments in every country has no bearing on this problem. An emic approach can solve it: the operationalization for measuring the construct is developed nationally, so that the culture-specific adequacy of each national instrument is high. Comparison at the construct level remains possible even though the instruments vary culturally, because functional equivalence has been established at the construct level through culture-specific measurement. In general, this procedure is even possible where national instruments already exist.
As measurement differs from culture to culture, integrating the national results can be very difficult. Strictly speaking, this disadvantage of emic studies means that only structure-oriented outcomes can be interpreted, and even then only after a thorny validation process: it has to be proven that measurements with different indicators on different scales really yield data on equivalent constructs. By using external reference data from every culture, complex weighting and standardization procedures can possibly lead to a valid equalization of levels and variances (van de Vijver & Leung 1996, 1997). In research practice, emic measurement and data analysis are often used to cast light on cultural differences.
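As a simple illustration of this idea, assuming hypothetical national instruments and externally supplied reference norms, raw scores can be z-standardized against each culture's reference mean and standard deviation. This is only a sketch of the principle, not the full weighting procedures described by van de Vijver and Leung (1996, 1997).

```python
import numpy as np

def to_reference_scale(scores, ref_mean, ref_sd):
    """z-standardize culture-specific raw scores against external norms."""
    return (np.asarray(scores, dtype=float) - ref_mean) / ref_sd

# Hypothetical emic case: the same construct was measured with different
# national instruments, so raw scores live on different scales. External
# reference data (e.g., national validation samples) supply the anchors.
scores_a = [14, 18, 22, 16]        # instrument A, 0-30 point scale
scores_b = [3.1, 4.0, 2.6, 3.5]    # instrument B, 1-5 point scale
z_a = to_reference_scale(scores_a, ref_mean=17.0, ref_sd=4.0)
z_b = to_reference_scale(scores_b, ref_mean=3.2, ref_sd=0.6)
print(z_a.round(2))   # both series are now on one comparable scale
print(z_b.round(2))
```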
If construct equivalence can be assumed after an in-depth analysis, an etic modus operandi can be recommended. In this logic, approaching the different cultures with the same or a slightly adapted instrument is valid because the constructs "function" equally in every culture; consequently, an emic procedure would most probably arrive at similar instruments in every culture anyway. Conversely, an etic approach must lead to bias and measurement artifacts when applied under conditions of missing construct equivalence.
Obviously, the advantages of emic procedures lie not only in the adequate measurement of culture-specific elements, but also in the possible inclusion of, e.g., idiographic elements of each culture. This approach can thus be seen as a compromise between qualitative and quantitative methodologies. Comparative researchers sometimes suggest analyzing cultural processes holistically, without breaking them down into variables, arguing that psychometric, quantitative data collection is suitable for similar cultures only. Against this simplification, one should remember the emic approach's potential to provide researchers with comparable data, as described above. Holistic analyses, in contrast, produce culture-specific outcomes that are not comparable; the problem of equivalence and bias has merely been shifted to the interpretation of results.
Adaptation Of The Instruments
Difficulties in establishing equivalence are regularly linked to linguistic problems. How can researchers establish functional equivalence without knowing every language of the cultures under examination? For the linguistic adaptation of the theoretical background as well as of the instruments, one can again distinguish between "more etic" and "more emic" approaches.
Translation-oriented approaches produce two translated versions of the text: one in the "foreign" language and one retranslated into the original language. The latter version can be compared with the original to evaluate the translation. Note that this method produces etically formed instruments, which can only work if functional equivalence has been established on every higher level. Van de Vijver and Tanzer (1997) call this procedure the application of an instrument in another language. In a "more emic" cultural adaptation, cultural singularities can be accommodated, e.g., by counterbalancing culture-specific connotations with a different item formulation.
Purely emic approaches develop entirely culture-specific instruments without translation. Two assembly approaches are available (van de Vijver & Tanzer 1997). First, in the committee approach, an international, interdisciplinary group of experts on the cultures, languages, and research field decides whether the instruments are to be constructed culture-specifically or whether a cultural adaptation will suffice. Second, the dual-focus approach seeks a compromise between literal, grammatical, syntactical, and construct equivalence: native speakers and/or bilinguals develop the different language versions together with the research team in a multistep procedure (Erkut et al. 1999).
Sampling
Usually, researchers select the countries or cultures to study by personal preference and accessibility of data. Forming such an atheoretical sample avoids many practical problems (but not cultural bias!), yet it also forgoes some advantages. Przeworski and Teune (1970) suggest two systematic, theory-driven approaches. The quasi-experimental most similar systems design is meant to bring out cultural differences: countries are chosen that are as similar as possible, so that the few dissimilarities between them are the most likely causes of different outcomes. Whenever the hypotheses highlight intercultural similarities, the most different systems design is appropriate. Here, in a kind of inverted quasi-experimental logic, the focus lies on similarities between cultures, even though the cultures differ in the greatest possible way (Kolb 2004; Wirth & Kolb 2004).
Random sampling and representativeness play a minor role in international comparisons. The number of states in the world is limited, and a normal distribution of the social factors under examination, i.e., the precondition of random sampling, cannot be assumed. Moreover, many statistical methods run into problems when applied to a small number of cases (Hartmann 1995).
Data Analysis And Interpretation Of Results
Given the conceptual and methodological problems presented above, special care must be taken with data analysis and the interpretation of results. As it is hardly feasible to include every relevant variable in international research, the documentation of methods, work process, and data analysis is even more important here than in single-culture studies, and the results should be validated in follow-up studies. In any case, intensive use of different statistical analyses beyond the "general" comparison of arithmetic means can further validate the results and especially their interpretation. Van de Vijver and Leung (1997) present a comprehensive overview of data analysis procedures, including structure- and level-oriented approaches, examples of SPSS syntax, and references.
Following Przeworski and Teune's (1970) research strategies, the results of comparative research can be classified into differences and similarities between the research objects. For both types, Kohn (1989) introduces two separate ways of interpretation. At first glance, intercultural similarities seem easier to interpret. The difficulties emerge when considering equivalence on the one hand (culturally biased similarities may conceal covert cultural differences) and the causes of similarities on the other. The causes are especially hard to determine for "most different" countries, as different combinations of different indicators can theoretically produce the same results. Esser (2000) points to diverse theoretical backgrounds that lead either to differences (e.g., action-theoretically based micro-research) or to similarities (e.g., system-theoretically oriented macro-approaches). In general, the starting point of Przeworski and Teune (1970), the quasi-experimental approach for "most similar systems with different outcome," seems the easier way to arrive at interesting results and interpretations. In addition to the advantages for causal interpretation, "most similar" systems are likely to be equivalent from the top level of the construct down to the level of indicators and items; "controlling" other influences minimizes methodological problems and makes analysis and interpretation more valid.
References:
- Erkut, S., Alarcón, O., García Coll, C., Tropp, L. R., & Vázquez García, H. A. (1999). The dual-focus approach to creating bilingual measures. Journal of Cross-Cultural Psychology, 30(2), 206–218.
- Esser, F. (2000). Journalismus vergleichen: Journalismustheorie und komparative Forschung [Comparing journalism: Journalism theory and comparative research]. In M. Löffelholz (ed.), Theorien des Journalismus: Ein diskursives Handbuch [Journalism theories: A discoursal handbook]. Wiesbaden: Westdeutscher, pp. 123–146.
- Esser, F., & Pfetsch, B. (eds.) (2004). Comparing political communication: Theories, cases, and challenges. Cambridge: Cambridge University Press.
- Hartmann, J. (1995). Vergleichende Politikwissenschaft: Ein Lehrbuch [Comparative political science: A textbook]. Frankfurt: Campus.
- Kohn, M. L. (1989). Cross-national research as an analytic strategy. In M. L. Kohn (ed.), Cross-national research in sociology. Newbury Park, CA: Sage, pp. 77–102.
- Kolb, S. (2004). Voraussetzungen für und Gewinn bringende Anwendung von quasiexperimentellen Ansätzen in der kulturvergleichenden Kommunikationsforschung [Preconditions for and advantageous application of quasi-experimental approaches in comparative communication research]. In W. Wirth, E. Lauf, & A. Fahr (eds.), Forschungslogik und -design in der Kommunikationswissenschaft, vol. 1: Einführung, Problematisierungen und Aspekte der Methodenlogik aus kommunikationswissenschaftlicher Perspektive [Logic of inquiry and research designs in communication research, vol. 1: Introduction, problematization, and aspects of methodology from a communications point of view]. Cologne: Halem, pp. 157–178.
- Lauf, E., & Peter, J. (2001). Die Codierung verschiedensprachiger Inhalte: Erhebungskonzepte und Gütemaße [Coding of content in different languages: Concepts of inquiry and quality indices]. In E. Lauf & W. Wirth (eds.), Inhaltsanalyse: Perspektiven, Probleme, Potentiale [Content analysis: Perspectives, problems, potentialities]. Cologne: Halem, pp. 199–217.
- Popper, K. R. (1994). Logik der Forschung [Logic of inquiry], 10th edn. Tübingen: Mohr.
- Przeworski, A., & Teune, H. (1970). The logic of comparative social inquiry. Malabar, FL: Krieger.
- Swanson, D. L. (1992). Managing theoretical diversity in cross-national studies of political communication. In J. G. Blumler, J. M. McLeod, & K. E. Rosengren (eds.), Comparatively speaking: Communication and culture across space and time. Newbury Park, CA: Sage, pp. 19–34.
- Vijver, F. van de, & Leung, K. (1996). Methods and data analysis of comparative research. In J. W. Berry, Y. H. Poortinga, & J. Pandey (eds.), Handbook of cross-cultural psychology. Boston, MA: Allyn and Bacon, pp. 257–300.
- Vijver, F. van de, & Leung, K. (1997). Methods and data analysis of cross-cultural research. Thousand Oaks, CA: Sage.
- Vijver, F. van de, & Leung, K. (2000). Methodological issues in psychological research on culture. Journal of Cross-Cultural Psychology, 31(1), 33–51.
- Vijver, F. van de, & Tanzer, N. K. (1997). Bias and equivalence in cross-cultural assessment: An overview. European Review of Applied Psychology, 47(4), 263–279.
- Wirth, W., & Kolb, S. (2004). Designs and methods of comparative political communication research. In F. Esser & B. Pfetsch (eds.), Comparing political communication: Theories, cases, and challenges. Cambridge: Cambridge University Press, pp. 87–111.