Identify Designs and Their Weaknesses. Campbell and Stanley define internal validity as the basic requirement for an experiment to be interpretable: did the experimental treatment make a difference in this specific instance? External validity addresses the question of generalizability: to whom can we generalize this experiment's findings? Eight extraneous variables can interfere with internal validity:

- History: the specific events occurring between the first and second measurements in addition to the experimental variable.
- Maturation: processes within the participants operating as a function of the passage of time, not specific to particular events (e.g., growing older or more tired).
- Testing: the effects of taking a test on the scores of a second testing.
- Instrumentation: changes in the calibration of a measurement tool, or changes in the observers or scorers, may produce changes in the obtained measurements.
- Statistical regression: operating where groups have been selected on the basis of their extreme scores.
- Selection: biases resulting from the differential selection of respondents for the comparison groups.
- Experimental mortality: the differential loss of respondents from the comparison groups.
- Selection-maturation interaction (and similar interactions of selection with other threats).

Four factors jeopardizing external validity, or representativeness, are:

- Reactive or interaction effect of testing: a pretest might increase or decrease the participants' sensitivity or responsiveness to the experimental variable.
- Interaction effects of selection biases and the experimental variable.
- Reactive effects of experimental arrangements, which would preclude generalization to persons exposed to the experimental variable in non-experimental settings.
- Multiple-treatment interference, likely to occur when multiple treatments are applied to the same participants.

Manipulation in this sense is similar to the definition of politics--who gets what. If the researcher decides who gets what, then manipulation has occurred.
In the example, the researcher randomly assigned students to one of two groups, so the researcher manipulated who would receive which treatment: cooperative learning or lecture. The third requirement--really more a characteristic than a requirement--is that groups are compared. In most experiments there will be at least two groups, perhaps more, which are compared on some outcome of interest, some dependent variable. In the example, the two groups are cooperative learning and lecture, and they are compared on performance on the final achievement test.
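The true-experiment logic described above can be sketched in a few lines of Python. The roster, scores, and function names here are hypothetical, for illustration only: random assignment forms the two groups, and the groups are then compared on the dependent variable (the final achievement test).

```python
import random

def randomly_assign(students, seed=0):
    """Randomly split a roster into two treatment groups
    (random assignment, the hallmark of a true experiment)."""
    pool = list(students)
    random.Random(seed).shuffle(pool)
    half = len(pool) // 2
    return {"cooperative": pool[:half], "lecture": pool[half:]}

def mean(scores):
    return sum(scores) / len(scores)

# Hypothetical roster of 20 students.
students = [f"S{i}" for i in range(1, 21)]
groups = randomly_assign(students)

# Hypothetical post-treatment achievement scores for each group;
# the two groups are compared on this dependent variable.
scores = {"cooperative": [78, 85, 90, 82, 88, 76, 91, 84, 79, 86],
          "lecture":     [72, 80, 75, 70, 83, 77, 74, 69, 81, 76]}
difference = mean(scores["cooperative"]) - mean(scores["lecture"])
```

Because assignment is random at the student level, any pre-existing differences between students are spread across both groups by chance, which is what licenses a causal reading of `difference`.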
Quasi-experimental research is just like true experimental research, with the single difference that the groups are not randomly formed. Of the two types of experimental research, quasi-experimental is the more commonly used in education. It is difficult to find schools that will allow a researcher to pull students out of their classes and assign them randomly to other classes, so in most educational research situations intact classes are used for the experiment. When intact classes or groups are used but manipulation is present--the researcher determines which group receives which treatment--the result is quasi-experimentation.
For example, a researcher uses his two classes for an experiment. He randomly assigns cooperative learning to class B and lecture to class A. Following the treatment, an instrument is administered to all participants to learn whether the treatments resulted in differences between the two classes. Note that in this example the groups were not randomly formed, but the treatment was manipulated and groups were compared, so the study is quasi-experimental.
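The contrast with the true experiment can be sketched as follows (class rosters are hypothetical): here only the treatments are shuffled; the intact classes are never split or re-formed, yet manipulation is still present because the researcher decides which class gets which treatment.

```python
import random

# Hypothetical rosters for two intact classes (not randomly formed).
class_a = ["A1", "A2", "A3", "A4"]
class_b = ["B1", "B2", "B3", "B4"]

def assign_treatments(class_a, class_b, seed=1):
    """Quasi-experiment: classes stay intact; only the treatments are
    randomly assigned to whole classes. The researcher manipulates
    who gets what, but the groups themselves pre-exist."""
    treatments = ["cooperative learning", "lecture"]
    random.Random(seed).shuffle(treatments)
    return {treatments[0]: class_a, treatments[1]: class_b}

assignment = assign_treatments(class_a, class_b)
```

Note the unit of randomization: in the true experiment it is the student; here it is the whole class, which is why pre-existing class differences remain a threat to internal validity.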
Ex Post Facto and Correlational. Both true and quasi-experimental research are distinguished by one common characteristic: manipulation of the independent variable. No other type of research has this manipulation. Two other forms of quantitative research, which are not experimental because they lack manipulation, are ex post facto (sometimes called causal-comparative) and correlational. Often both of these types are grouped into what researchers call non-experimental research, or simply correlational research.
Thus, correlational research in the broad sense can be understood to include both of the types discussed below; for our purposes, however, we will make a distinction between them. Ex post facto research looks like an experiment because groups are compared; there is, however, a key difference--no manipulation of the independent variable.
With ex post facto research, the difference between groups on the independent variable occurs independently of the researcher. These differences already exist, and their impact on the outcome is identified by comparing the groups.
While both designs are non-experimental, each is appropriate for a different type of problem. The ex post facto design also has some similarities with the correlational design.
Ex post facto research is ideal for conducting social research when it is not possible or acceptable to manipulate the characteristics of human participants. It serves as a substitute for true experimentation.
Ex post facto study, or after-the-fact research, is a category of research design in which the investigation starts after the fact has occurred, without interference from the researcher.
Ex post facto design is a quasi-experimental study examining how an independent variable, present in the participants prior to the study, affects a dependent variable. (Quasi-experimental here simply means that participants are not randomly assigned.) Causal research of this kind uses two related terms: ex post facto studies gather data retrospectively (e.g., given the obvious effects of smoking, the researcher looks into the past to find the potential cause), while in causal-comparative studies data are gathered from pre-formed groups and the independent variable is not manipulated in the experiment. For this, the researcher will have either to find a population for which the data are available, or to find an already existing appropriate group.
An ex post facto research design is a method in which groups with qualities that already exist are compared on some dependent variable. Also known as "after the fact" research, an ex post facto design is considered quasi-experimental because the subjects are not randomly assigned--they are grouped by pre-existing characteristics. Common pitfalls in ex post facto research include the failure to recognize the limitations of the design and the failure to realize when an experimental approach is possible.
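To make the "after the fact" comparison concrete, here is a minimal sketch (record fields and scores are hypothetical): group membership already exists in the data before the study begins; the researcher neither assigns nor manipulates anything, and only compares the pre-formed groups on the dependent variable.

```python
def compare_preexisting_groups(records, group_key, outcome_key):
    """Ex post facto comparison: the groups pre-exist in the data
    (no assignment, no manipulation); we only compare their means
    on the dependent variable after the fact."""
    by_group = {}
    for r in records:
        by_group.setdefault(r[group_key], []).append(r[outcome_key])
    return {g: sum(vals) / len(vals) for g, vals in by_group.items()}

# Hypothetical records: class membership predates the study.
records = [
    {"class_type": "cooperative", "score": 85},
    {"class_type": "cooperative", "score": 79},
    {"class_type": "lecture", "score": 74},
    {"class_type": "lecture", "score": 70},
]
means = compare_preexisting_groups(records, "class_type", "score")
```

Because nothing was assigned or manipulated, a difference in `means` describes the groups but cannot, on its own, establish cause--exactly the limitation of the design noted above.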