An experimenter effect, which results in experimenter bias, can threaten internal validity across all types of experimental and quasi-experimental research design. Such an experimenter effect is typically unintentional, but arises because of (a) the personal characteristics of the researcher, which influence the choices made during a study; and (b) non-verbal cues that the researcher gives off that may influence the behaviour and responses of participants. Some of the more generic personal characteristics that may lead to bias include the experimenter's age, class, gender, race, and so forth. It can sometimes help to think of experimenter effects as relating either to implementation/research methods or to directional hypotheses/personal biases. Each is discussed in turn:
Implementation / research methods
Even with a good research design, the way an experiment is implemented can affect scores on the dependent variable in ways that do not reflect the treatment (i.e., the independent variable). This may change the scores on the dependent variable for either the treatment group or the control group (e.g., the lecturer with the greater teaching ability being assigned to the control group).
Study #2 recap
For example, if you were interested in the impact of two different teaching methods, namely students receiving lectures and seminar classes compared to students receiving lectures only (i.e., your independent variable) on the exam performance of students (i.e., your dependent variable), you may also want to ensure that the lecturers/teachers involved in the study had a similar educational background (e.g., a teaching degree, a degree in the subject being taught, etc.), teaching experience (e.g., number of years teaching), and so forth.
The goal of such random assignment is to avoid the potential selection bias that can occur when the groups being compared are not similar before the research starts. Taking the example above, we may expect that students who received not only lectures, but also seminar classes, would perform better than those students who only received lectures. However, what if the lecturers who taught the group that only had lectures (and no seminar classes) were considerably more experienced teachers, with a much stronger educational background than the lecturers who taught the group that had lectures and seminar classes? Whilst we may still expect the students who had both lectures and seminar classes to get higher exam marks than the students who only had lectures, perhaps the difference in the exam marks is no longer significant. You would then no longer know whether the difference in exam marks (i.e., your dependent variable) was due to differences in the teachers' ability (a source of bias) or the two different teaching methods (i.e., the independent variable).
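As an illustration of the point about random assignment, the short Python sketch below randomly assigns a pool of lecturers to the two teaching conditions, so that differences in teaching experience are spread across the groups by chance rather than by anyone's (possibly biased) choice. The lecturer names and experience values are invented purely for illustration.

```python
import random

random.seed(42)  # fixed seed so the assignment can be reproduced

# Hypothetical lecturers with years of teaching experience (invented values).
lecturers = [
    {"name": "A", "experience": 22},
    {"name": "B", "experience": 3},
    {"name": "C", "experience": 15},
    {"name": "D", "experience": 5},
]

# Random assignment: no one chooses which lecturer teaches which group.
random.shuffle(lecturers)
half = len(lecturers) // 2
groups = {
    "lectures + seminars": lecturers[:half],  # treatment condition
    "lectures only": lecturers[half:],        # control condition
}

for label, group in groups.items():
    mean_exp = sum(l["experience"] for l in group) / len(group)
    names = [l["name"] for l in group]
    print(f"{label}: {names} (mean experience: {mean_exp:.1f} years)")
```

With enough lecturers, random assignment tends to balance experience (and characteristics we did not think to measure) across the conditions on average, although with as few as four lecturers, as here, an unlucky shuffle can still produce imbalanced groups.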
Whilst the teachers/lecturers may not be the experimenters, per se, in the sense that they are not necessarily the researchers carrying out the study, they play a central role in the study, and could easily have a significant influence on the performance of the students. This creates a threat to internal validity.
Directional hypotheses / personal biases
In quantitative research, you often make predictions about the outcome of an experiment. These predictions may come in the form of directional hypotheses. We call such a hypothesis directional, rather than non-directional, because we are predicting the direction of the outcome of an experiment, not simply that there will be an outcome. For example: as physical activity increases, risk of heart disease decreases; as pay increases, employee motivation increases [see the section on Research (and null) hypotheses].
Seldom will you design an experiment thinking that nothing will happen, or having no idea about the potential outcome. For example, we think that a new teaching method will improve student exam performance, so we design an experiment to find out if this is the case; we think that introducing background music into a packing facility will increase employee task performance, so we design an experiment to test our directional hypothesis.
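To make the idea concrete, the Python sketch below tests a directional hypothesis of the kind described above, namely that the lectures-plus-seminars group scores higher than the lectures-only group, using a one-tailed independent-samples t-test. The exam marks are simulated (the means and spreads are invented), and SciPy 1.6 or later is assumed for the alternative parameter.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated exam marks (invented for illustration only).
lectures_plus_seminars = rng.normal(loc=68, scale=8, size=40)  # treatment group
lectures_only = rng.normal(loc=63, scale=8, size=40)           # control group

# A directional hypothesis predicts the direction of the effect, so the
# alternative is "greater", not merely "the two means differ".
result = stats.ttest_ind(lectures_plus_seminars, lectures_only,
                         alternative="greater")
print(f"t = {result.statistic:.2f}, one-tailed p = {result.pvalue:.4f}")
```

Had the hypothesis been non-directional, the default alternative="two-sided" would be used instead.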
Since you, as the experimenter, may make such predictions, it is possible that certain personal biases will enter the research process. These personal biases are often exhibited in the experimenter's behaviour, which may include being more (or less) helpful, friendly, or informative towards the different groups involved in the study, in ways that influence their behaviour. Whilst this may be an unconscious form of bias, it can lead to changes in the dependent variable that are due not only to the treatment (i.e., the independent variable), but also to experimenter effects. The threat to internal validity is greater when the measurement of the dependent variable is more subjective (e.g., the experimenter rates a student's performance on a scale of 1-10) than when a less subjective measurement device, such as a written test or behavioural scale, is used.
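One practical safeguard against this kind of bias, sketched below under our own assumptions rather than anything prescribed in this section, is to blind whoever makes the subjective judgement to group membership. The participant identifiers, codes, and stand-in scores are all hypothetical.

```python
import random

random.seed(7)

# Hypothetical participants with their (to-be-hidden) group labels.
participants = [
    {"id": 1, "group": "treatment"},
    {"id": 2, "group": "control"},
    {"id": 3, "group": "treatment"},
    {"id": 4, "group": "control"},
]

# Replace identities with opaque codes; the code-to-group key is kept
# by someone other than the rater until scoring is finished.
codes = random.sample(range(1000, 9999), k=len(participants))
key = {code: p["group"] for code, p in zip(codes, participants)}

# The rater sees only the opaque codes and records a 1-10 judgement.
# (Scores here are random stand-ins for the rater's judgements.)
blinded_scores = {code: random.randint(1, 10) for code in codes}

# Group labels are re-attached only after all scoring is complete.
unblinded = [(key[code], score) for code, score in blinded_scores.items()]
print(unblinded)
```

Because the rater never sees which condition a code belongs to, any tendency to score one group more generously has nothing to latch onto.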
Subject effects / participant reactivity
Subject effects (or participant reactivity) occur when the way that participants behave in an experiment is different from the way that they would normally behave. These changes in behaviour reflect participants' knowledge that they are being studied, which may lead to them acting aggressively/defensively, cooperatively/uncooperatively, or in some other way that affects their score on the dependent variable. Participants may behave differently in order to mirror the behaviour that they think the researcher wants to see, or they may do it for their own reasons. Nonetheless, this behavioural modification can threaten the internal validity of the study because the way that participants reacted, rather than the treatment (i.e., the independent variable), may explain the changes in the dependent variable.
Subject effects are likely to be greater in staged experiments where it is particularly obvious that subjects are part of an experiment (e.g., a laboratory setting), compared with less staged environments where participants know that they are part of an experiment, but are not so "under the microscope".