# Validity
Review Date: 2022-04-01
#on/research | #on/science | #on/validity
## What is the thesis from Messick?
Validation is a scientific process. Validity is not all-or-none; it is a graduated, faceted assessment of the evidence combined with an evaluative judgment. It is an evolving process that changes as context, society, and evidence change. Validity is a combination of empirical and theoretical evidence about the degree to which inferences may be drawn from data (e.g., test scores). Validity is a unified concept. As a concept, validity is fragile and depends on the number of measurements: a single test item is more fragile than a whole test, which is more fragile than a sequence of tests, which in turn is less convincing than a career of evidence. In the end, validity is about drawing _inferences_ between an assessment and a real-world outcome.
Since validity is an argument, it is an argument _with_ someone in a context, so the audience has to be kept in mind throughout. For example: is it validity for formative assessments by nurses?
## Types of Validity
### Content
Domain definition, domain relevance, domain representation, and appropriate test construction procedures. _Tests should represent the domain and should not contain material outside the intended domain._ Content validity is "the credibility, the soundness, of the assessment instrument itself for measuring the construct of interest."
![[Response Process Validity]]
### Internal Structure
The statistical relations between different measures. For example, what is the correlation between raters using the same instrument to perform a measurement (inter-rater reliability)? What is the correlation within one rater's repeated measurements (intra-rater reliability)?
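The inter-rater question above can be sketched as a simple correlation between two raters' scores. This is a toy illustration with made-up ratings, not our analysis plan: `pearson_r` is a hypothetical helper, and for actual rater agreement an intraclass correlation coefficient would usually be preferred over Pearson's r.

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

# Hypothetical data: two raters scoring the same five performances.
rater_a = [3, 4, 2, 5, 4]
rater_b = [3, 5, 2, 4, 4]
r = pearson_r(rater_a, rater_b)
```

A high `r` would suggest the two raters rank performances similarly, which is one narrow slice of internal-structure evidence.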
### Relation to other variables
### Consequences
- Park et al.'s rater reliability paper (_Advances in Health Sciences Education_): provides a template for response-process and internal-structure validity evidence
- Lockspeiser et al.'s validity paper (_Academic Medicine_): provides a template for all five sources of validity evidence: content, response process, internal structure, relations to other variables, and consequences
- I think our paper will have some combination of elements from above.
- As discussed our paper will incorporate the following sources of validity evidence:
- Content: Delphi and instrument development
- Response process: (1) rater accuracy, (2) rater consistency, (3) qualitative comments from prior (and current) development phases, and (4) impact of rater training
- Internal structure: instrument statistics (internal-consistency reliability), discrimination, measurement error
- Consequences: Improvement in learner/participant skills by serving as assessors; we will decide whether to include this in the first paper, depending on data/results
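The internal-consistency reliability listed above is conventionally computed as Cronbach's alpha. A minimal sketch, with made-up scores and a hypothetical `cronbach_alpha` helper (not our actual instrument data or analysis code):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a score matrix.

    scores: one row per examinee, one column per item.
    """
    k = len(scores[0])                          # number of items
    items = list(zip(*scores))                  # transpose to per-item columns
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical ratings: four examinees scored on three items.
scores = [[4, 5, 4], [2, 3, 3], [5, 5, 4], [3, 3, 2]]
alpha = cronbach_alpha(scores)
```

Alpha near 1 indicates the items covary strongly (examinees who score high on one item score high on the others), which is one piece of internal-structure evidence.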
Please let me know the final number of assessors participating. I will use this to design the rater-assignment matrix for our next meeting.
## Am I convinced and why?
## Summarize the argument
## What is the other side of the argument?
## What else do I wonder about?
## Action
## When do I want to stumble across this?
## Sources
Sireci, S. G. (1998). The construct of content validity. _Social Indicators Research_, _45_(1), 83–117.
Messick, S. (1987). Validity. _ETS Research Report Series_, _1987_(2), i–208. [https://doi.org/10.1002/j.2330-8516.1987.tb00244.x](https://doi.org/10.1002/j.2330-8516.1987.tb00244.x)
![[ETS Research Report Series - December 1987 - Messick - VALIDITY.pdf]]
[See YSPark Email](hook://email/BN8PR04MB6338499B2CFC56BB7BC3ADE794859%40BN8PR04MB6338.namprd04.prod.outlook.com)