As a concept, internal validity is important because we want to be able to say that the conclusions we draw in our dissertation accurately reflect what we were studying. For example, if we conclude that exercise reduces heart disease, we want to be able to say this with as much certainty as possible, confident in the knowledge that what we studied, and not other factors, explains our results.
Internal validity is something that can affect dissertations guided by a quantitative, qualitative or mixed methods research design [see the section on Research Designs if you are unsure which research design your dissertation follows]. However, if your dissertation was guided by a qualitative research design, the idea of internal validity is often referred to as dependability, which, whilst similar to internal validity, is not the same.
In quantitative research designs, the level of internal validity will be affected by (a) the type of quantitative research design you adopted (i.e., descriptive, experimental, quasi-experimental or relationship-based research design), and (b) potential threats to internal validity that may have influenced your results. In this article, we (a) explain what internal validity is, and (b) discuss and provide examples of the various threats to internal validity.
Broadly speaking, there are four types of quantitative research design: descriptive, experimental, quasi-experimental and relationship-based research designs [see the article, Research Designs, to learn more]. Experimental research designs, also known as intervention studies, provide the greatest warrant (i.e., support) for knowledge claims (e.g., all sheep are not black, exercise reduces heart disease, etc.) because they can make the claim that X, the independent variable, causes Y, the dependent variable. By using the word causes, we mean that the independent variable (X) leads to a change in the dependent variable (Y) [see the article, Types of variables, if you are unsure about the difference between independent and dependent variables]. For example, if an experimental study found that students who turn up to seminars in addition to lectures get better marks than those students who only turn up to lectures, we may argue that seminar attendance (i.e., the independent variable) increases (i.e., causes an increase in) exam performance (i.e., the dependent variable).
This idea that X causes Y is important because internal validity is about being able to justify that X actually caused Y. We highlight the word actually because there are many different reasons that can make it difficult to know whether X causes Y. We may think that X causes Y; in other words, we may assume that X causes Y. But we cannot say with certainty that this is the case. This reflects the fact that there are many threats to internal validity that can undermine our results, which are discussed in the next section [see the section: Threats to internal validity]. It also reflects the fact that there are different types of quantitative research design (i.e., descriptive, experimental, quasi-experimental and relationship-based research designs), which can make us more or less confident that our conclusions are internally valid.
Dissertations can suffer from a wide range of potential threats to internal validity, which have been discussed extensively in the literature (e.g., Campbell, 1963, 1969; Campbell & Stanley, 1963; Cook & Campbell, 1979). In this section, 14 of the main threats to internal validity that you may face in your research are discussed: history effects, maturation, testing effects, instrumentation, statistical regression, selection biases, experimental mortality, causal time order, diffusion (or imitation) of treatments, compensation, compensatory rivalry, demoralization, experimenter effects and subject effects. In the sections that follow, each of these threats to internal validity is explained with accompanying examples.
History effects refer to events that happen in the environment that change the conditions of a study, affecting its outcome. Such an event can happen before the start of an experiment, or between the pre-test and post-test. To affect the outcome of an experiment in a way that threatens its internal validity, a history effect must (a) change the scores on the dependent variable, and (b) change the scores of one group more than another (e.g., increase the scores of the treatment group compared with the control group or a second treatment group). If you are unsure about some of these core aspects of experimental designs, you may want to first read the article: Experimental research designs.
To understand more about history effects, consider the following characteristics: the timing of history effects, the length of a study and the magnitude of history effects. Each of these characteristics is examined in turn: