Experimental research (ER) is a type of research designed to test hypotheses under controlled conditions using experimental methods. In this type of research, one or more independent variables (IV) are manipulated by the researcher (as treatments), subjects are randomly chosen and assigned to different treatment levels, and the effects of the treatments on the dependent variables (DV) are observed in order to examine cause-effect relationships between them.
ER has been defined in various ways, but all definitions share several obligatory features. First, it involves a DV and an IV. A variable, as the term itself suggests, is anything that does not remain constant; it can be defined as an attribute of a person, a piece of text, or an object that "varies" from person to person, text to text, object to object, or from time to time (Lazaraton, 1991). To understand how the variables in a study relate to one another, researchers first identify their functions and label them as dependent or independent (Lazaraton, 1991). The IV is the variable that the experimenter expects to influence the other variable, the dependent variable. Once the experiment is completed, the hypothesized relationship between the two variables is either supported or rejected. Researchers reach this conclusion by manipulating the IV (the treatment) in one group (the experimental group) and keeping it constant in another (the control group); this way, any change in the DV can be attributed to the IV. Before this cause-effect relationship can be examined, however, the researcher must first control for all other sources of variation or influence (confounding variables), randomly select subjects from the population (random sampling), and, again randomly, assign these subjects to the different comparison groups (random assignment).
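The two randomization steps described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical subject IDs and group sizes, not a prescription for any particular study:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 100 subject IDs
population = list(range(100))

# Random sampling: draw 40 subjects from the population
sample = random.sample(population, 40)

# Random assignment: shuffle the sample, then split it into
# an experimental group and a control group of equal size
random.shuffle(sample)
experimental_group = sample[:20]
control_group = sample[20:]

print(len(experimental_group), len(control_group))  # 20 20
```

In practice the sampling frame, sample size, and number of groups depend on the design; the point is that both the selection from the population and the assignment to conditions are left to chance.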
There are three types of ER, distinguished by the way the researcher assigns subjects to different conditions and groups. True experimental research, the first type, is the most rigorous experimental design and may be carried out with or without a pretest on at least two randomly assigned groups. It can be classified into three designs: the posttest-only design, in which the subjects of the two groups (experimental and control) are observed and post-tested after the experimental group (EG) receives the treatment, and a conclusion is drawn from the difference between the groups; the pretest-posttest design, which involves pre-testing and post-testing both groups before and after the treatment of the EG; and the Solomon four-group design, which combines the previous two designs. The second type of ER, quasi-experimental research, is used in settings where randomization is difficult or impossible (most frequently in applied research); it still allows control over the IV(s) and involves manipulation, but it lacks random subject selection and assignment. The third type, non-experimental research, allows the least control because it deals with phenomena occurring in their natural environment. It attempts to determine relationships among variables without manipulating them (Ball, 2019), and it can be further classified into descriptive/exploratory survey studies and inter-relationship/difference studies (Ball, 2019).
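The logic of the posttest-only design can be made concrete with a small sketch: after the treatment, the two groups are measured once and the conclusion rests on the difference between their means. The scores below are invented purely for illustration:

```python
from statistics import mean

# Hypothetical posttest scores (posttest-only design)
experimental_scores = [78, 85, 82, 90, 76, 88, 84, 79]
control_scores = [70, 74, 68, 72, 75, 69, 73, 71]

# The conclusion is drawn from the difference between group means
effect = mean(experimental_scores) - mean(control_scores)
print(effect)  # 11.25
```

A real analysis would also test whether such a difference is statistically significant (e.g., with a t-test); the sketch only shows where the comparison comes from.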
The credibility of ER is evaluated against two criteria: the research must be both reliable and valid. Reliability refers to the consistency of results, and it can be estimated by comparing different applications of the same measurement: across time, i.e., variation in measurements of the same person on the same test on different occasions under identical conditions (test-retest reliability); across researchers, i.e., variation in measurement across different raters or observers (interrater reliability); across items, i.e., variation in measurements from different items of a test that are designed to measure the same thing (internal consistency reliability); and across methods, i.e., variation in measurements obtained by different methods or instruments (intermethod/parallel-forms reliability).
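Test-retest reliability, the first of the estimates above, is commonly quantified as the correlation between the two sets of scores: the closer it is to 1, the more consistent the measurement. A minimal sketch with hypothetical scores for five subjects, using a hand-rolled Pearson correlation:

```python
from statistics import mean, pstdev

def pearson(x, y):
    """Pearson correlation between two paired lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical scores for the same five subjects on two occasions
time1 = [10, 12, 14, 16, 18]
time2 = [11, 13, 13, 17, 19]

r = pearson(time1, time2)
print(round(r, 3))  # high correlation -> consistent measurement
```

The same correlation logic underlies parallel-forms reliability (correlating two versions of a test), while internal consistency is usually summarized with coefficients such as Cronbach's alpha.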
Validity, on the other hand, corresponds to truthfulness or appropriateness; research that satisfies this requirement will yield results that are comparable across similar samples and will eventually lead to interpretations that can be applied to similar populations or situations (Ball, 2019). Researchers assess the validity of a study in two ways. The first is test validity, which includes questions about face validity (do the items of a test appear to measure what the study claims to measure?), construct validity (do the items measure the concept they intend to measure?), content validity (are the items fully representative of the concept being measured?), and criterion validity (do the results of a test correspond to other valid measures of the same concept?). The second is experimental validity, which is concerned with the procedure itself and the interpretation of results (Ball, 2019). It is split into two types: internal validity and external validity. The former refers to the degree of confidence that the causal relationship being tested is trustworthy and not influenced by other factors or variables; threats to this type of validity include history, maturation, instrumentation, attrition (mortality), diffusion, and others. The latter, external validity, refers to the extent to which the results of a study can be applied (generalized) to other situations, groups, or events.
In conclusion, despite its numerous advantages, such as the straightforward determination of causal relationships, the possibility of verifying results through replication, and the opportunity to create conditions that are not easily observed in natural settings (Ball, 2019), ER, like any other research design, has its limitations. These include its unnaturalness, ethical considerations, difficulty of generalization, and impracticality for some types of research problems (Ball, 2019).