HSK 861 - Measuring Individual Differences
Course Description
Researchers in the academic and private sectors often need to measure some aspect of people’s psychology, be it their attitudes, satisfaction, motivation, or intentions. We assume that the numbers these scales, questionnaires, tests, and surveys produce are meaningful: that someone with a higher satisfaction score is in fact more satisfied than someone with a lower score. Because scale scores are used to make consequential decisions, such as how to measure critical outcomes in a research study, whether to develop a product, or whether to admit a student or promote an employee, researchers need to thoroughly evaluate the validity of those scores. This short course will cover how to develop, evaluate, and refine scales using modern psychometric methods.
Summary
This course will be helpful for researchers in any field that relies on social science methodology, including psychology, sociology, education, business, human development, social work, public health, and communication, who want to develop and use scales to measure psychological attributes. Learners should have background knowledge in introductory statistics topics such as univariate statistical tests, descriptive statistics, and correlation; ideally, they will also be comfortable with multiple regression techniques. Proficiency in a specific software package isn’t required, but participants will ideally have some familiarity with running analyses in a statistical package (e.g., R, SPSS, SAS, Stata).
Course Details
Learning Statement
In this course, you will learn how to apply modern validity theory and psychometric methods to appropriately develop and use scales measuring psychological attributes.
- Overview of construct validity theory and types of validity evidence
- Item writing
- Item content review and think-aloud protocol
- Interpreting item analysis
- Overview of types of factor analysis
- Interpreting exploratory factor analysis
- Interpreting reliability analysis
- Interpreting and evaluating validity evidence for scale selection and use
The course will focus on scale development and refinement with psychometric methods that can be implemented in many statistical software packages. Because this is a hands-on course, learners are encouraged to bring a laptop to class with a copy of R or SPSS installed. However, instruction will focus on demonstrating the statistical techniques and interpreting the most common outputs in various software programs. Provided materials and examples will include analysis scripts with annotated output from both SPSS and R.
Learning Outcomes
Upon completing this course, you will
- Define construct validity and describe different forms of validity evidence
- Evaluate scale items for poor, confusing, or problematic wording
- Use descriptive statistics to quantitatively evaluate item properties
- Use qualitative approaches to review item content
- Compare different approaches to factor analysis
- Compare different approaches to quantifying reliability
- Interpret an exploratory factor analysis
- Interpret a reliability analysis
- Evaluate multiple sources of validity evidence to select a scale
- Evaluate multiple sources of validity evidence to develop or refine a scale
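To make the item-analysis outcomes above concrete, here is a minimal sketch of the corrected item-total correlation, a standard descriptive index of how well an item tracks the rest of the scale. The data are hypothetical and Python is used only for illustration; course materials use SPSS and R.

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def corrected_item_total(items, i):
    """Correlation of item i with the sum of the *remaining* items.

    items: one list per item, each holding every respondent's score.
    Dropping item i from the total avoids inflating the correlation
    by correlating the item with itself.
    """
    rest = [col for j, col in enumerate(items) if j != i]
    rest_totals = [sum(scores) for scores in zip(*rest)]
    return pearson(items[i], rest_totals)

# Hypothetical responses: 4 respondents, 3 Likert-type items.
responses = [
    [1, 2, 4, 5],
    [2, 2, 5, 4],
    [1, 3, 4, 5],
]
for i in range(len(responses)):
    print(i, round(corrected_item_total(responses, i), 2))
```

Items with low or negative corrected item-total correlations are candidates for rewording or removal, one of the judgments this course trains you to make.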
Instructor Bio
Dr. Jessica Flake is an Associate Professor of Quantitative Psychology at the University of British Columbia. She regularly teaches measurement and statistics courses as well as workshops at international conferences. Her work focuses on technical and applied aspects of psychological measurement, including scale development, psychometric modelling, and scale use and replicability, and is published in top journals such as Nature Human Behaviour, Psychological Methods, Advances in Methods and Practices in Psychological Science, Structural Equation Modeling, Psychological Science, and the Journal of Personality and Social Psychology. Dr. Flake was named an Association for Psychological Science Rising Star in 2021 and received a Society for the Improvement of Psychological Science Commendation in 2020 for her research into questionable measurement practices. In 2025, she was honoured with the Anne Anastasi Award from the American Psychological Association for her early-career contributions to quantitative methods.
She also works in applied psychometrics as a technical advisory panel member for the Enrollment Management Association, a non-profit that develops educational assessments, and serves as the Assistant Director for Methods at the Psychological Science Accelerator, a laboratory network that conducts large-scale studies.