The results of polls tell us how many people intend to vote for a certain political party, watch TV more than four hours a day, or favor a certain TV program. We call the methods of collecting and analyzing such data “quantitative methodology” because individuals’ attributes are counted in large numbers. One can count not only single persons, but also propositions within texts, visual elements within pictures or sequences of film material, and observed actions and overt behaviors. Counting people, words, the duration of a TV program, and so on is deeply rooted in our everyday lives and nothing artificial. Quantitative methodology builds on these basic social practices but elaborates them into a complex pattern of systematic (methodological) rules and standards.
Aims of Quantitative Methodology
The main aims of quantitative methodology are comparison and measurement. To compare individuals, text units, or behaviors, it is necessary to have a common basis as a starting point for comparison. Without commonalities or standards no comparisons can be made. The logic of quantitative methodology is basically the logic of standardization, which implies reducing the context complexity around the research object in question. With the help of standardization it is possible to measure the attributes of research units. Measurement is related to the attributes of research units; research units, however, are not viewed in their (complex) entirety but reduced to variables that can be analyzed separately.
One can standardize either individual features of the research process or the research process as a whole. Standardizing the entire research process means proceeding systematically and in a certain order of procedures. We start with research questions or with hypotheses referring to a specific research object. Research questions without hypotheses, which are potential answers to these questions, imply an explorative research strategy. If hypotheses, which can be deduced from abstract theories, are available, a confirmative research strategy will be preferred, which tests whether the hypotheses are true or false. Regardless of whether the research strategy is explorative or confirmative, a second step requires research method(s) adequate to answer the research questions or to test the hypotheses. Theoretical concepts must be translated into empirical indicators – a process we call operationalization. Furthermore, a sample of research units needs to be selected. The subsequent process of data collection demands direct contact and interaction between the researcher and the research field explored. After collection, the data are analyzed with quantitative (statistical) tools and interpreted in the context of the research questions or hypotheses. Each step of the research process is checked against the scientific rules that a given scientific community agrees on within a certain scientific context.
Methods and Instruments
We can now take a closer look at several steps of the research process and their particular standardization. The most relevant objects of standardization are the research instrument and the research situation. In polls and surveys people are questioned about their opinions toward certain social phenomena. The respondent’s opinion is treated as a variable that can be separated from other (measurable) attributes. The wording and the order of the questions are laid down in a questionnaire. The interviewer is admonished to follow the interviewing rules strictly and not to use his/her own words. The respondent has to fit his/her answers to predefined answer categories, such as “strongly agree,” “somewhat agree,” “don’t know,” “somewhat disagree,” and “strongly disagree.” It is the respondent’s task to translate the mental representation of his/her opinion into the given (i.e., communicated) categories and to select the category that best fits his/her opinion. This procedure ignores the respondent’s own semantic representation of his/her opinion and the context of the cognitive process that forms this opinion, as well as other cognitive and communicative aspects related to the opinion in question, such as prototypical examples, narratives, etc. For the purpose of comparison the respondent’s answer is only of interest as the standardized semantic result of the preceding complex, context-bound cognitive processes of opinion formation. Typical variables collected in surveys may be attributes (gender, age, income, etc.), opinions or attitudes (toward social phenomena, such as norms), knowledge (of persons, structures, processes, etc.), or behaviors (hours of TV watched, voting, etc.).
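The logic of standardized answer categories can be illustrated in a short sketch: the respondent’s free-formed opinion is never recorded as such; only the selected category enters the data, mapped to a numeric code. All category labels and numeric codes below are hypothetical choices for illustration, not a fixed convention.

```python
# Hypothetical mapping of predefined answer categories to numeric codes.
LIKERT_CODES = {
    "strongly agree": 2,
    "somewhat agree": 1,
    "don't know": 0,
    "somewhat disagree": -1,
    "strongly disagree": -2,
}

def code_answer(answer: str) -> int:
    """Translate a respondent's selected category into its numeric code.
    Everything outside the predefined categories is lost by design."""
    return LIKERT_CODES[answer.lower()]

# Three respondents' selections become three comparable numbers.
responses = ["strongly agree", "don't know", "somewhat disagree"]
coded = [code_answer(r) for r in responses]
print(coded)  # [2, 0, -1]
```

The mapping makes the answers of all respondents comparable, which is exactly the aim of standardization described above.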
The research instrument of content analysis is called the “code-book.” It includes a set of standardized semantic categories of attributes that are relevant with regard to the research questions, and which the coder searches for within the documents (text, proposition, photograph, film, etc.). Coding categories can be formal elements (such as the length of an article, the position of an article within a newspaper, etc.), semantic variables (such as the theme of an article, actor-related categories, etc.), or pragmatic variables (such as assessments of actors, organizations, arguments, etc.). Although coding is not an automatic process (with the exception of computer-aided content analysis), a deeper or hermeneutical understanding of the text or of propositions within the text is not necessary for coding. The coder’s task is to assign textual or visual elements to the given categories. A coding manual or commentary contains the coding rules the coder has to apply to the document to be coded.
With regard to observational methods, the observer pays attention to overt behaviors or apparent attributes of the observed people, which s/he records in a code sheet or a list with given categories. Other concomitant phenomena observed spontaneously are not of interest in the context of the standardized schedule. The coding process works analogously to that of content analysis.
As well as the research instrument and the research situation, the complete research design and the process of data collection can also be standardized. Within experimental research designs independent variables are controlled with the help of manipulated stimulus material or the treatment of experimental subjects. In order to test causal effects, subjects are randomly allocated to experimental groups and/or a control group. The experimental group, for instance, is exposed to a horror movie to find out whether this stimulus causes anxiety reactions. The proof for the causal effect of the movie content on the media user’s emotional state either requires the comparison between a pre-test and a post-test measurement, or a comparison between the measurement of the experimental group after presenting the stimulus material and that of the control group, which is exposed to no stimulus material at all or to a neutral stimulus material.
In the first experimental design, which compares two measurements of the same subjects, the experimental subjects should show stronger feelings or reactions of anxiety after viewing the horror movie than before it, so that the anxious response can be attributed exclusively to the movie and not to any other circumstances. This design only proves the hypothesized causal effect if the repeated measurement does not sensitize the subjects to the measurement instrument and thereby bias the results. In the second experimental design, which compares different subjects, the subjects of the experimental group should show stronger feelings or reactions of anxiety than the subjects of the control group who viewed a neutral movie, so that anxiety can be attributed exclusively to the reception of the horror movie and not to other attributes of the subjects or of the research situation. This design only proves the hypothesized causal effect if the stimulus materials of the experimental group and the control group clearly differ and if both groups are comparable (identical) with regard to other relevant subject attributes.
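The second design can be sketched in a few lines: subjects are randomly allocated to two groups, and the causal claim rests on comparing the group means after exposure. The anxiety scores below are invented for illustration; a real study would of course also need a significance test.

```python
import random
import statistics

def assign_groups(subjects, seed=42):
    """Randomly allocate subjects to an experimental and a control group,
    so that group differences cannot be traced to subject attributes."""
    rng = random.Random(seed)
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

subjects = list(range(10))               # ten hypothetical subject IDs
experimental, control = assign_groups(subjects)
print(len(experimental), len(control))   # 5 5

# Hypothetical post-test anxiety scores (higher = more anxious).
experimental_scores = [7, 8, 6, 9, 7]    # viewed the horror movie
control_scores = [3, 4, 2, 5, 3]         # viewed a neutral movie

# The causal inference rests on the difference between group means.
diff = statistics.mean(experimental_scores) - statistics.mean(control_scores)
print(round(diff, 2))  # 4.0
```

The random allocation, not the comparison itself, is what licenses attributing the difference to the stimulus rather than to pre-existing subject attributes.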
There are further research designs in quantitative and standardized methodology, such as trend or panel designs, which are often applied in polls on voting behavior or on political attitude change. In trend studies the same research instrument is used for different samples at different points of time. Points of time can be chosen at very close intervals (e.g., daily) and data can be analyzed with the help of time-series statistics. As the samples differ at each point of time, only aggregated differences can be measured. To measure individual differences, panel designs should be applied. In panel designs, standardized measurements can be repeated several times for the same sample, but not as often as in trend designs, because subjects become sensitized to the research instrument and bias their responses. Furthermore, the sample size in panel designs continuously diminishes because of “panel mortality,” i.e., respondents’ decreasing willingness to keep participating in the survey. As a result, the sample may be biased or become too small for complex statistical analysis.
Logic of Sampling Procedure and Sampling Technique
Quantification of data also has consequences for sampling procedures, because quantitative methodology requires sufficiently large samples (of respondents or text units) for data analysis. The general aim of sampling is the generalization of the results. The most far-reaching kind of generalization is a representative sample, which means that the distribution of relevant variables in the sample corresponds to the distribution of these variables in the total population. A random sampling technique tries to keep the chance of getting into the sample (approximately) equal for every unit (respondent, text, observed person, etc.). Although the probability of getting into the sample is low if the total population is large, it is never zero. The procedure of random sampling requires practical conditions that ensure the distributions within the sample are representative of the distributions within the total population. If some relevant parameters of the total population are known, quota sampling is also possible. The distributions of these variables in the total population serve as quota instructions for the sampling procedure, which ensures that the distributions in the sample represent the distributions in the total population. Other sampling procedures and techniques vary according to the research question or the data-collection method applied. In experiments, for instance, samples need not be representative, but should be selected at random; this is important for the statistical testing of the hypotheses.
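The contrast between the two techniques can be sketched as follows: a random sample gives every unit the same chance of selection, while a quota sample fills fixed per-category targets derived from known population distributions. The population, the gender attribute, and the quota values are all hypothetical.

```python
import random

def simple_random_sample(population, n, seed=0):
    """Random sampling: every unit has the same chance of selection."""
    return random.Random(seed).sample(population, n)

def quota_sample(population, quotas, key, seed=0):
    """Quota sampling: fill fixed quotas per category (e.g., gender) so the
    sample distribution mirrors the known population distribution."""
    rng = random.Random(seed)
    shuffled = population[:]
    rng.shuffle(shuffled)
    counts = {k: 0 for k in quotas}
    sample = []
    for unit in shuffled:
        k = key(unit)
        if counts.get(k, 0) < quotas.get(k, 0):
            sample.append(unit)
            counts[k] += 1
    return sample

# Hypothetical population of 100 respondents, half of each gender.
population = [{"id": i, "gender": "f" if i % 2 else "m"} for i in range(100)]

random_sample = simple_random_sample(population, 10)
balanced = quota_sample(population, quotas={"f": 5, "m": 5},
                        key=lambda u: u["gender"])
print(len(random_sample), len(balanced))  # 10 10
```

Note that the quota sample guarantees the marginal distribution of the quota variable by construction, whereas the random sample only approximates it, with an approximation that improves as the sample grows.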
All steps taken to standardize the process of data collection reduce the complex information about individuals or text units to separate variables and categories, abstracted from their context. With the help of statistical data analysis we can now count the number of individuals or units with regard to the collected variables (univariate statistics) and we can correlate the variables (bivariate statistics). Information about the sample distribution is expressed as means and standard deviations of variables; information about the correlation of variables can be documented in cross-tabulations or in mean comparisons between groups (defined by variables such as gender, age categories, education, etc.).
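A minimal illustration of this step, using invented survey data (gender and daily TV hours per respondent): univariate statistics describe one variable in isolation, while a bivariate mean comparison relates TV hours to gender.

```python
import statistics

# Hypothetical coded survey data: (gender, daily TV hours) per respondent.
data = [("f", 2), ("m", 4), ("f", 3), ("m", 5), ("f", 2), ("m", 3)]
hours = [h for _, h in data]

# Univariate statistics: distribution of a single variable.
mean_hours = statistics.mean(hours)
sd_hours = statistics.stdev(hours)
print(mean_hours, sd_hours)

# Bivariate statistics: mean comparison between groups defined by gender.
by_gender = {}
for g, h in data:
    by_gender.setdefault(g, []).append(h)
group_means = {g: statistics.mean(v) for g, v in by_gender.items()}
print(group_means)  # mean TV hours per gender group
```

The group means are already a simple reconstruction of context: the separately collected variables “gender” and “TV hours” are put back into relation with each other.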
It is also possible to test complex theoretical models with many variables (multivariate statistics). Several statistical tools discover structures within the data (e.g., factor analysis, cluster analysis, discriminant analysis); others use a confirmative logic to test models (e.g., analysis of variance). The complexity lost or reduced within the process of data collection by standardizing the variables and neglecting their context can partly be re-established by analyzing the correlations between the separately collected abstract variables. The analytical outcome may be a complex relationship between variables, which enables the researcher to reconstruct context.
Evaluative Criteria for Quantitative Research
Quantitative methodology is based on certain criteria to assess its quality. These criteria are objectivity, validity, and reliability. The notion of “objectivity” is somewhat misleading because it does not imply truth in an epistemological sense. Instead, objectivity is related to the research process and procedures. Research procedures have to be systematically planned and carried out. Researchers have to bear in mind the scientific rules that the scientific community agrees on. In that sense, objectivity has a normative aspect and is therefore replaced by the notion of “intersubjectivity”.
The same is true for validity: a research instrument is valid if it measures what it claims to measure. Validity is a criterion for assessing the relationship between a theoretical concept and its empirical indicator. Measuring the relevance of media coverage of a certain theme by the position of articles in a newspaper makes sense because front-page news is considered more relevant than articles placed at the back of the newspaper. Although this example seems obvious, the assessment of validity follows a circular logic, as validity can only be ascertained within scientific discourse and not with the help of a formal procedure.
In contrast to validity, reliability is a formal criterion, which can be mathematically calculated as a coefficient of agreement. Reliability is related to the stability of the research instrument. A research instrument is reliable if its repeated use does not change the outcome. In content analysis the coding scheme is reliable if different coders use it in the same way with the same coding results (inter-coder reliability) or if the same coder uses it in the same way at the beginning of the coding procedure as at its end (intra-coder reliability). The same is true for observation, with the exception that an observed situation cannot be re-observed (unless it has been recorded). Therefore, intra-coder reliability can only be simulated by observing similar situations. In surveys reliability is related to similar instruments (questions in the questionnaire). If different questions that are supposed to measure the same construct lead to similar or equal answers, these questions are considered reliable.
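The simplest such coefficient of agreement is the share of coding decisions on which two coders agree; a small sketch with invented codings of ten articles illustrates it. (More refined coefficients, such as Cohen’s kappa, additionally correct for agreement expected by chance; they are not shown here.)

```python
def percent_agreement(coder_a, coder_b):
    """Inter-coder reliability as simple percent agreement:
    the share of units both coders assigned to the same category."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must code the same units")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical theme codings of ten newspaper articles by two coders.
coder_a = ["politics", "sports", "economy", "politics", "culture",
           "sports", "politics", "economy", "sports", "culture"]
coder_b = ["politics", "sports", "economy", "culture", "culture",
           "sports", "economy", "economy", "sports", "culture"]

print(percent_agreement(coder_a, coder_b))  # 0.8
```

Intra-coder reliability is computed the same way, with the two sequences coming from the same coder at different points in the coding process.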
In sum, quantitative methodology is characterized by the relationship between standardization of research instruments, research situations, and research designs, quantification of analysis, generalization of results gained from samples, and systematization of research procedures. Research is a technical and rule-based process. The underlying premise is that standardizing methods and research procedures creates a common basis of preconditions, which allows for comparison across different research objects. As a consequence, the research object is measured in a standardized way and can only be explored in terms of standardized and quantified variables. It is cut off from its individual context, but it can be analyzed systematically. Furthermore, standardization is the only way to analyze data quantitatively. Collecting a sizeable amount of data is of immense value for the generalization of empirical results because the probability of variety within the data increases – even if not necessarily in a linear fashion – with the number of objects researched.
Critique and Defense
For the reasons mentioned above, researchers who prefer qualitative methodology object to quantitative methodology, claiming that the loss of complexity within the process of data collection neglects the problem of understanding and interpretation. A respondent’s selection of a certain category from a given set of categories would not imply that she or he understands the category in the same way as the researcher does or other respondents do. Selecting a certain category by interpreting it as the best fit for one’s own opinion would be only a vague indicator of the “true” value of the opinion in question. Standardized data-collection methods would only capture the researcher’s presumption of what a question or a category means. Whether the meaning insinuated by the researcher fits the authentic meaning of the “research object” or not could not be technically assessed.
From a meta-theoretical perspective it is possible to reconstruct the communication model that underlies quantitative or standardized methodology: it is a stimulus–response model because quantitative methodology supposes that standardizing the meaning of the research stimulus (questions in surveys, categories in content analysis and in observation) causes the standardized understanding of the research object (respondents in surveys), of the coder (in content analysis), or of the observer (in observation).
Advocates of quantitative methodology, however, invoke the premise of nonsystematic or random error accompanying the process of measurement. Although a standardized instrument (questionnaire, code-book, observation schedule) is not able to represent the “true” value itself, it yields a statistically probable estimation of it. Every bias (deviation from the “true” value) is less probable than the collected value, provided the measurement has been carried out correctly. Systematic errors only occur if and when attributes of the data-collection process itself interfere with attributes of the object (respondent, text, observed person).
Critics may again object that interferences between the measurement process and the measured attribute cannot be avoided and are typical of social phenomena, as the process of data collection is a social process itself. From a constructivist epistemological viewpoint it is indeed not possible to separate both kinds of “realities” (data collection and data itself). Constructivists claim that we should not even speak of (measured) data but of (constructed) facts, as measurement can be considered a social process following and violating the methodological rules developed and declared valid by the scientific community.
We need not solve this epistemological problem here, because it is more fundamental than the choice between a quantitative and a qualitative methodology. As a consequence of the constructivist objections, we should interpret validity not in an ontological but in a pragmatic sense. The “correct” use of data-collection and data-analysis tools is not an indicator of truth or approximation to reality but a pragmatic consensus within a scientific community in a given social and historical context. Again, the argumentation is based on standardization: if all researchers use the tools of data collection in the same way, the results can be compared and discussed. If the rules change and other research instruments are developed, they should be compared with the old ones, either to find a new agreement among the scientific community or to compete over which is the better way.