Participants frequently **drop out** of experiments whilst they are taking place (i.e., before they finish), something known as **experimental mortality** (or **experimental attrition**). There can be many reasons why participants drop out:

- Death (the most extreme)
- No longer willing to take part
- No longer available
- Geographical move
- Negatively impacted by the treatment condition (e.g., anger, apathy, frustration)

Experimental mortality becomes a threat to internal validity when the number of dropouts differs between the comparison groups (i.e., the treatment group and control group). Imagine the following scenario:

Study #4

A comparison of two dieting regimes on weight loss

The experiment: We want to compare the effectiveness of two types of dieting regime (i.e., the independent variable), one decidedly more demanding/aggressive than the other, on weight loss (i.e., the dependent variable). Initially, participants of different levels of health status, ranging from healthy to obese, are randomly assigned to the two dieting regimes (i.e., treatments A and B). Let's say that **treatment A** required more **self-discipline** because participants had to change their diet without any external help, whilst **treatment B** provided participants with regular check-ups (e.g., Weight Watchers) and professional counselling.

Experimental mortality: Suppose that whilst 96% of participants remained in the **treatment B** program, only 85% remained in the **treatment A** program. Note that this difference alone need not, in principle, be a significant threat to internal validity. However, if a much higher proportion of those who dropped out of treatment A were the more obese participants (e.g., because they required more support/counselling to help them lose weight than the healthier individuals, support which treatment A did not provide), the average (i.e., mean) weight loss of the treatment A group at the end of the study could be lower than would have been expected. As a result, the difference in scores on the dependent variable (i.e., weight loss) could not be explained solely by the application of the two different treatments (i.e., the independent variable), but also by **experimental mortality**. This becomes a threat to the internal validity of the results.
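The bias described above can be sketched with a small simulation. All of the numbers below are invented for illustration: we assume the two regimes are equally effective by construction, that obese participants would lose more weight than healthy participants, and that obese participants drop out of the unsupported regime (treatment A) at a higher rate.

```python
import random

random.seed(42)

def observed_mean_loss(n, dropout_obese, dropout_healthy):
    """Mean weight loss amongst participants who complete the study."""
    completers = []
    for i in range(n):
        obese = (i % 2 == 0)               # half of each group starts obese
        true_loss = 10.0 if obese else 3.0  # hypothetical: obese lose more
        p_drop = dropout_obese if obese else dropout_healthy
        if random.random() >= p_drop:       # participant completes the study
            completers.append(true_loss)
    return sum(completers) / len(completers)

# Treatment A (demanding, no support): obese participants drop out more often.
mean_a = observed_mean_loss(4000, dropout_obese=0.25, dropout_healthy=0.05)
# Treatment B (supported): low, even dropout across both kinds of participant.
mean_b = observed_mean_loss(4000, dropout_obese=0.04, dropout_healthy=0.04)

print(f"Observed mean loss, treatment A: {mean_a:.2f} kg")
print(f"Observed mean loss, treatment B: {mean_b:.2f} kg")
# Both regimes are equally effective by construction, yet treatment A's
# observed mean is lower, because the participants who stood to lose the
# most weight dropped out of A disproportionately.
```

The gap between the two observed means is produced entirely by differential attrition, not by the treatments themselves, which is exactly the threat to internal validity described above.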

Experimental mortality is only likely to be a significant threat to internal validity if the experiment lasts a long time, since the potential for dropouts to occur (e.g., geographical move, apathy, problems of availability, etc.) increases. This is especially likely when the treatment condition is particularly demanding (or when some levels of the independent variable are more demanding than others), encouraging a particular section of the treatment group to drop out. For example, in a long-term dieting regime, the lower-performing dieters, who find the regime more taxing than others, may drop out at a greater rate than others in the treatment group. This can lead to a negative **selection effect**, where the treatment and control groups no longer **match** on the criteria used when **randomly assigning** participants to either group (e.g., there are no longer as many obese individuals in the treatment group, whose scores on the dependent variable would ordinarily differ from those who remain; i.e., an **under-representation problem**) [see the section: Selection bias and internal validity].

Dropout is only a potential threat to internal validity in the following scenarios:

- Pre- and post-test designs. In post-test only designs, there would only be a problem if a large number of participants dropped out of one group (i.e., either the treatment group or the control group) before the single post-test. However, this is unlikely.

- The dropout rate and the make-up of the participants that drop out are unequal between the treatment group and control group.

It is reasonable to assume that in some experimental and quasi-experimental research designs, there will be a difference in the dropout rate between the group that receives the treatment (i.e., the experimental group) and the group that does not (i.e., the control group). Note that we call the group that receives the treatment the **treatment group**, irrespective of whether we use an **experimental** or **quasi-experimental research design**. For example, a difference in dropout rates between the experimental and control groups may occur when the treatment is particularly demanding, whether physically, psychologically, in terms of time, or in some other way. In such circumstances, we would expect to see a greater dropout rate amongst members of the experimental group. Sometimes, quasi-experimental research designs do not involve a control group, but two or more treatment groups, each of which receives a different treatment. Again, greater dropout rates may be witnessed amongst the groups receiving the more demanding treatment(s). Where dropout rates are higher in one group compared to another, it becomes more difficult to conclude that the outcome measure (i.e., the dependent variable) is the result of the treatment (i.e., the independent variable) and not of dropout.
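One practical response to this problem is a simple attrition check before analysing the results: compare completion rates and the baseline make-up of completers between groups. The records below are invented for illustration (1 = completed; `True` = obese at baseline).

```python
def attrition_summary(group):
    """group: list of (completed, baseline_obese) tuples.
    Returns the completion rate and the obese share amongst completers."""
    completed = [g for g in group if g[0]]
    rate = len(completed) / len(group)
    obese_share = sum(1 for _, obese in completed if obese) / len(completed)
    return rate, obese_share

# Hypothetical records: the treatment group loses obese participants heavily,
# the control group loses very few participants of either kind.
treatment = [(1, True)] * 30 + [(0, True)] * 20 + [(1, False)] * 45 + [(0, False)] * 5
control   = [(1, True)] * 47 + [(0, True)] * 3  + [(1, False)] * 47 + [(0, False)] * 3

t_rate, t_obese = attrition_summary(treatment)
c_rate, c_obese = attrition_summary(control)
print(f"Treatment: {t_rate:.0%} completed; {t_obese:.0%} of completers obese")
print(f"Control:   {c_rate:.0%} completed; {c_obese:.0%} of completers obese")
# A gap in either figure warns that differential mortality, rather than the
# treatment, may explain part of any difference in outcomes.
```

Here the treatment group not only completes at a lower rate but ends up with a different baseline composition from the control group, which is the two-part warning sign (rate and make-up) described above.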

In experimental research, we expect the independent variable to lead to a change in the dependent variable. For example, if we were interested in the impact of two different teaching methods (i.e., the independent variable) on the exam performance of students (i.e., the dependent variable), we would expect the independent variable to come before the dependent variable, and not the other way around. In other words, the exam performance of students (i.e., the dependent variable) cannot influence the two teaching methods (i.e., the independent variable); that would not make sense. Instead, the two teaching methods could result in a change in the exam performance of students. However, there are rare occasions in research where this **sequence** (or **causal time order**) between the independent and dependent variable is **ambiguous**. In such cases, it can be uncertain whether the independent variable caused the change in the dependent variable, or whether the dependent variable caused the change in the independent variable. This may be because there is an **extraneous variable** (i.e., a third variable) involved that has not been measured, but is actually responsible for (or moderates) the changes in the independent and/or dependent variable [see the article: Extraneous and confounding variables].
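The third-variable problem can be made concrete with a short simulation. The variable names here are hypothetical: an unmeasured trait, motivation, drives both hours of study (the apparent independent variable) and exam score (the dependent variable), so the two correlate even though, by construction, neither causes the other.

```python
import random

random.seed(0)

# Unmeasured extraneous variable: motivation.
motivation = [random.gauss(0, 1) for _ in range(5000)]

# Both observed variables depend on motivation plus independent noise;
# note that `hours` never appears in the formula for `score`.
hours = [m + random.gauss(0, 1) for m in motivation]
score = [m + random.gauss(0, 1) for m in motivation]

def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / (vx * vy) ** 0.5

r = corr(hours, score)
print(f"correlation(hours, score) = {r:.2f}")
# The correlation is substantial even though hours has no causal effect on
# score: the extraneous variable alone produces it.
```

From the correlation alone, the causal time order is ambiguous: we cannot tell whether hours drove scores, scores drove hours, or, as here, a third variable drove both.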