Measures and Psychometrics
What should you produce?
- A written description of the measures you're going to use. In most cases, this will be 2-3 pages, but it may be a bit longer if you're using many measures.
- Your description needs to explain what each measure is for (how it is tied to your research question).
- Your description needs to evaluate the psychometric evidence for the measures you're using. This includes both reliability and validity evidence. The reliability evidence should address the kind of reliability your study requires (e.g., if a multi-item measure is used at two time points, I want to hear about evidence for both internal consistency and test-retest reliability). For validity, cover as much as you can, especially construct validity. (When you read about the reliability and validity, make sure to consider the subjects and procedures used to establish them.)
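As a concrete illustration of the two reliability indices mentioned above, here is a minimal sketch using made-up data (the item scores and time-1/time-2 totals are hypothetical, purely for demonstration). Cronbach's alpha summarizes internal consistency across items; a Pearson correlation between two administrations is one common way to express test-retest reliability. Variances use the population formula; either convention works for alpha as long as it is applied consistently.

```python
def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """Internal consistency. `items` is a list of per-item score lists,
    one inner list per item, each of the same length (one entry per
    respondent)."""
    k = len(items)
    total_scores = [sum(vals) for vals in zip(*items)]
    item_var_sum = sum(variance(vals) for vals in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(total_scores))

def pearson_r(x, y):
    """Test-retest reliability expressed as a Pearson correlation
    between scores from two administrations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return cov / (variance(x) ** 0.5 * variance(y) ** 0.5)

# Hypothetical 3-item scale, 4 respondents
items = [
    [4, 3, 5, 2],   # item 1
    [4, 2, 5, 3],   # item 2
    [5, 3, 4, 2],   # item 3
]
print(round(cronbach_alpha(items), 2))   # prints 0.89

# Hypothetical total scores at time 1 and time 2 (same respondents)
t1 = [13, 8, 14, 7]
t2 = [12, 9, 14, 8]
print(round(pearson_r(t1, t2), 2))       # prints 0.98
```

In a real measures section you would report these statistics from published validation studies rather than compute them yourself, but knowing what each index summarizes helps you evaluate whether the reported evidence matches your study design.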
Steps to take
- Make a list of possible measures for the constructs of interest in your study. Do this even if you are using an existing dataset. My goal here is for you to be thoughtful and have a good understanding of possibilities. I’m not going to make you get a specific number. This is your project and you’ll have to defend your choices.
- Provide pros and cons regarding the measures. This should include limitations of the measures. Think in terms of threats to validity as well as psychometrics.
- Do a literature review of the psychometric properties of your chosen measures. For the class, I want you to find at least one paper that has investigated reliability and validity (reliability and validity can come from separate papers or the same paper). However, see my requirements for your paper: you need to provide enough documentation to cover the necessary parts of reliability and validity.
- Write your measures section.
Parts of the Workshop
Complete the relevant steps prior to the due date. See Learning Suite for the due dates.
- Workshop 3a: Steps 1-3
- Workshop 3b: Step 4
Feedback to Colleagues
These are suggestions – they aren't the only things you should consider when evaluating your colleagues' work. Be thoughtful and remember to provide actionable feedback.
- Have your colleagues considered viable measures?
- Have your colleagues given a sufficient description of the measures that you understand how they work and how they are administered? Consider things like self-report vs. clinician-administered formats, as well as how the tests are scored. Have your colleagues been clear about how they will use the measure (just a subscale or a total score; how will the scores be produced?)?
- Have your colleagues provided reliability and validity evidence for each of their measures? Were they specific about the reliability and validity studies (how was the reliability and validity established, what subjects were used, etc.)? Have your colleagues provided reliability and validity evidence for biological measures (e.g., EEG, fMRI, biofeedback measures)?
- If your colleagues are using a measure that lacks reliability and validity evidence, have they provided justification for why they use that measure and why it is worth it? Are you convinced by their argument?
- Finally, have your colleagues specified the key outcome measures? In other words, do you understand which measures will provide evidence regarding their research question? (And which measures are descriptive or serve as covariates?)
What will you turn in to me?
Turn in your completed measures section (3a + 3b) that takes into account your colleagues' feedback. You should also include the comments you received from colleagues, as well as a brief summary of how you dealt with their feedback.