In all approaches to Route C: Extension, you need to think carefully about every aspect of research quality - internal validity, external validity, reliability and construct validity - because the changes that you make to the research strategy of the main journal article (whether to the research design, the research methods and measures, or the sampling strategy) can significantly affect the quality of your findings. This reflects the extra level of originality and independent thought that goes into all approaches to Route C: Extension compared with Route A: Duplication and Route B: Generalisation, especially design-based extensions and method or measurement-based extensions, but also population and context/setting-based extensions. These different types of extension are discussed in turn:
Population and context/setting-driven extensions
Population or context/setting-driven extensions require changes to the measures used within the research methods (e.g., the questions in the survey or the observation points in the structured observation) that were applied in the main journal article when studying the new population or context/setting in your dissertation. This is necessary because population or context/setting-driven extensions require you to add, modify or omit certain constructs and/or variables from the original study to reflect the differences in the characteristics of the new population or context/setting that you are studying. Such changes have a number of implications for the quality of your findings:
Reliability and construct validity:
Since you are changing aspects of the measurement procedure used in the main journal article, you need to make sure that the new measurement procedure remains reliable (NOTE: Testing for the reliability of a measurement procedure is something that we show you how to do in the Data Analysis part of Lærd Dissertation). Unfortunately, you will not know whether this is the case (i.e., whether adding, modifying or omitting certain constructs or variables has reduced or improved the reliability of the measurement procedure in your dissertation compared to the one used in the main journal article) until you have collected and analysed the data. Nonetheless, since your measurement procedure must be reliable before your findings can be valid, assessing reliability is an essential part of assessing research quality. In addition to reliability, you need to assess the construct validity of the measurement procedure. To do this, you will need to read up on content validity, convergent and divergent validity, and criterion validity, all of which need to be taken into account when making changes to the measurement procedure.
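One common way of testing the reliability of a multi-item measurement procedure is to estimate its internal consistency with Cronbach's alpha. The sketch below is a minimal illustration only - the item scores are invented, and the calculation is the standard alpha formula rather than anything specific to your main journal article:

```python
import statistics

def cronbachs_alpha(items):
    """Estimate internal consistency (Cronbach's alpha).

    `items` is a list of lists: one inner list of respondent scores
    per questionnaire item (all inner lists the same length).
    """
    k = len(items)  # number of items in the scale
    item_variances = sum(statistics.pvariance(item) for item in items)
    # Total score per respondent across all items
    totals = [sum(scores) for scores in zip(*items)]
    total_variance = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical responses: 3 Likert-style items, 4 respondents
items = [
    [2, 4, 3, 5],
    [3, 4, 3, 4],
    [2, 5, 3, 5],
]
alpha = cronbachs_alpha(items)
print(round(alpha, 2))  # roughly 0.92 for this made-up data
```

A common rule of thumb treats an alpha of around 0.7 or above as acceptable, but appropriate thresholds depend on the field and the construct being measured.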
Internal and external validity:
When considering the potential research quality of your research strategy, you need to focus on eliminating potential threats to the internal validity and external validity of your dissertation that are, to some extent at least, within your control, and which you can minimise in the way that you plan and carry out your data collection phase. We are not talking about threats to internal and external validity that may result from the research design or research methods that were used in the main journal article (e.g., threats to internal validity such as testing effects, instrumentation, causal time order, etc., or threats to external validity such as those relating to methods and confounding, or "real world" versus "experimental world" differences). Rather, we are talking about threats that you do have the potential to control and reduce, whether these are controllable threats to internal validity (e.g., selection biases, diffusion of treatments, compensation, etc.) or threats to external validity (i.e., especially construct issues and selection biases). Minimising such threats will help you to argue that your results accurately reflect what you were studying, reducing potential criticism that other factors (e.g., selection biases) explain your results, as well as strengthening the case for making generalisations from your sample to some wider population (or across populations and contexts/settings).
As you might have noticed, reducing selection biases when creating your sample (i.e., sampling biases) is particularly important to improving the research quality of dissertations that follow a population or context/setting-driven extension within Route C: Extension. As discussed in STEP FOUR: Sampling strategy, this is because the characteristics of the new population or setting/context that are important are likely to be different from those that were important in the main journal article. As a result, you have to rely less on the sampling strategy used in the main journal article and focus more on the population and setting/context that you are interested in. Therefore, when creating your sample in a dissertation following a population or context/setting-driven extension within Route C: Extension, you have to: (a) decide what the most important characteristics of your sample are; and (b) make sure that the sample you select is as representative of the population you are interested in as possible (i.e., if you have to use a non-probability sampling technique rather than a probability sampling technique when creating your sample, you already know that the internal validity and external validity of your findings are being reduced because the representativeness of your sample is being threatened). However, since you have made changes to the measures used within the research methods, something that you don't do in Route B: Generalisation, you also need to make sure that these changes do not reduce the reliability of your measurement procedure below acceptable levels (NOTE: We explain what are considered acceptable levels in the Data Analysis part of Lærd Dissertation).
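The contrast between probability and non-probability sampling can be made concrete with a short sketch. The sampling frame, identifiers and sample size below are entirely hypothetical; the point is only that a simple random sample gives every unit in the frame an equal chance of selection, which is what underpins claims about representativeness:

```python
import random

# Hypothetical sampling frame: 500 employee IDs at the organisation being
# studied (in practice this would come from a real list, e.g. HR records).
sampling_frame = [f"EMP{i:03d}" for i in range(500)]

random.seed(42)  # fixed seed so the draw is reproducible

# Probability sampling (simple random sample): every employee in the
# frame has an equal chance of being selected.
sample = random.sample(sampling_frame, k=50)

# A convenience (non-probability) sample, by contrast, might just take
# whoever is easiest to reach - e.g. the first 50 names on the list -
# which threatens the representativeness of the sample.
convenience_sample = sampling_frame[:50]
```

The convenience sample here systematically excludes 450 of the 500 employees, which is exactly the kind of selection bias the guide warns about.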
Design, method and measurement-driven extensions
Design-driven extensions require changes to the research designs used in the main journal article, whilst method-driven extensions involve changes to the research methods, and measurement-driven extensions changes to the measures used within the research methods (e.g., the questions in the survey or the observation points in the structured observation). Of course, your dissertation may involve a combination of design, method and measurement-driven extensions. As such, we deal with the broader implications of assessing research quality across these types of extension-based dissertations:
Internal and external validity:
When you change the research design, research methods or measures, or sampling strategy used in the main journal article, this can have significant implications for the internal validity and external validity of your findings. Unlike in dissertations that follow Route A: Duplication and Route B: Generalisation, you have much greater influence and control over factors that could reduce or increase threats to both internal validity (e.g., threats such as maturation, testing effects, causal time order, diffusion of treatments, experimenter and subject effects, etc.) and external validity (e.g., threats such as selection biases, as well as those relating to constructs, methods and confounding, and the "real world" versus the "experimental world"). Minimising such threats will help you to argue that your results accurately reflect what you were studying, reducing potential criticism that other factors (e.g., selection biases) explain your results, as well as strengthening the case for making generalisations from your sample to some wider population (or across populations and contexts/settings).
Reliability:
Since you are changing more than just aspects of the measurement procedure used in the main journal article when taking on a method or measurement-driven extension (i.e., compared to a population or context/setting-driven extension), you need to make sure that the new measurement procedure is reliable (NOTE: Testing for the reliability of a measurement procedure is something that we show you how to do in the Data Analysis part of Lærd Dissertation). Unfortunately, you will not know whether this is the case until you have collected and analysed the data. Nonetheless, since your measurement procedure must be reliable before your findings can be valid, assessing reliability is an essential part of assessing research quality, especially in method or measurement-driven extensions.
There are a number of types of reliability that you may need to consider when assessing the main journal article, depending on whether the measurement procedure involved: (a) successive measurements (i.e., in experimental and quasi-experimental research designs where there is a pre-test followed by a post-test); (b) simultaneous measurements by more than one researcher (i.e., if you are not collecting data alone, but are working with other researchers); and/or (c) multi-measure procedures (i.e., where a number of variables are used to measure a single construct such as employee stress or organisational commitment, but no single variable can explain such a complex construct). You can learn more about these types of reliability in the article: Reliability in research.
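The first two of these reliability types can be sketched numerically. The scores and ratings below are invented for illustration; the point is only the shape of each calculation: reliability across successive measurements is usually reported as the correlation between the two measurement occasions, and inter-rater reliability (in its simplest form) as the proportion of cases on which two raters agree:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# (a) Successive measurements: the same 5 participants at pre-test
# and post-test (hypothetical scores).
pre_test = [10, 12, 14, 16, 18]
post_test = [11, 12, 15, 15, 19]
test_retest = pearson_r(pre_test, post_test)  # near 1 = stable scores

# (b) Simultaneous measurements: two observers coding the same 5
# events as present (1) or absent (0).
rater_a = [1, 0, 1, 1, 0]
rater_b = [1, 0, 0, 1, 0]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(round(test_retest, 2), agreement)  # 0.96 0.8 for this made-up data
```

Simple percent agreement ignores agreement that would occur by chance; chance-corrected statistics such as Cohen's kappa are often preferred for simultaneous measurements.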
Construct validity:
In addition to reliability, you need to assess the construct validity of the measurement procedure. To do this, you will need to read up on content validity, convergent and divergent validity, and criterion validity, all of which need to be taken into account when making changes to the measurement procedure.
Content validity is the extent to which the elements (e.g., questionnaire items, coding criteria, participant instructions, etc.) within a measurement procedure (e.g., a survey, structured observation or structured interview) are relevant and representative of the construct that they will be used to measure. Establishing content validity is a necessary initial task in the construction of a new measurement procedure (or the revision of an existing one). However, the validity (e.g., construct validity) and reliability (e.g., internal consistency) of the content (i.e., elements) selected should be tested before an assessment of content validity can be made [to learn more, see the article: Content validity].
At the undergraduate and master's level, it is not uncommon for the elements that are included in a measurement procedure (e.g., for a survey, such elements could be the questions, coding criteria, participant instructions, etc.) to be selected based on less than scientific/rigorous methods (i.e., assessing a measurement procedure based on such superficial methods is known as face validity). However, not only are dissertations heavily criticised for this, but one of the benefits of drawing on published research is that measurement procedures should already be more carefully considered.
If you are adopting a method-based extension and are using two measurement procedures (e.g., two research methods such as structured observation and a survey) to measure the same construct (e.g., anger, depression, motivation, task performance), you can assess the construct validity of these measurement procedures using convergent validity and divergent validity. If you are using a well-established measurement procedure (e.g., a 42-item survey on depression) as the basis to create a new measurement procedure (e.g., a 19-item survey on depression) to measure the construct you are interested in, you can assess the criterion validity of the new measurement procedure (i.e., you do this in terms of either the concurrent validity or predictive validity of your measurement procedure).
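Both convergent validity and the concurrent form of criterion validity are commonly assessed by correlating scores from the two measurement procedures. As a hedged sketch (the total scores below are invented, and a real study would use far more respondents than six), here is what the concurrent-validity check for a short-form survey might look like:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical total scores for 6 respondents who completed both the
# established 42-item depression survey and the new 19-item short form.
full_scale = [30, 22, 35, 28, 40, 18]
short_form = [14, 10, 17, 12, 19, 9]

# Concurrent validity: a high correlation suggests the short form
# measures much the same construct as the established scale.
concurrent_validity = pearson_r(full_scale, short_form)
print(round(concurrent_validity, 2))  # roughly 0.99 for this made-up data
```

Assessing convergent validity has the same shape, except the two score lists come from two different research methods (e.g., structured observation and a survey) measuring the same construct, rather than from two versions of one survey.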
One of the overarching themes of design, method and measurement-driven extensions is that they require a lot more thought when it comes to assessing and ensuring the quality of the findings that are generated. Not only are there additional considerations when it comes to reducing threats to internal and external validity, but you have to focus a lot harder on ensuring the construct validity and reliability of your measurement procedure. At the undergraduate and master's level, you will not get this completely right; academics don't either. The main goal is for you to set a research strategy where the research quality of your findings is taken into account, and maintained as best as possible.
At the end of the day, the better you understand the weaknesses in your research strategy, the easier it will be to either overcome these, or manage them as best as possible. Remember that all research has limitations, which negatively impact upon the quality of the findings you arrive at from your data analysis. The best way to recognise these and overcome them is to get a good understanding of the five ways through which research quality is assessed; that is, based on the internal validity, external validity, construct validity, reliability and objectivity of the research [see the Research Quality section of the Fundamentals part of Lærd Dissertation to learn about these terms]. Reflecting on these will help you to reduce threats to internal and external validity, and improve the reliability and construct validity of your measurement procedure.