Robert Adcock and David Collier, “Measurement Validity: A Shared Standard for Qualitative and Quantitative Research,” American Political Science Review 95:3 (2001), pp. 529-546.

Main Argument: The authors seek to formulate a methodological standard for measurement validation that can be applied in both qualitative and quantitative research, and they offer a new account of different types of validation. Content validation makes the indispensable contribution of assessing what they call the adequacy of content of indicators. Additionally, Adcock & Collier encourage scholars to distinguish between issues of measurement validity and broader conceptual disputes.
== Notes ==
Problems with Measurement Validity:
*  Measurement validity is specifically concerned with whether operationalization and the scoring of cases adequately reflect the concept the researcher seeks to measure (KKV call it “Descriptive Inference”)
*  The skepticism with which qualitative and quantitative researchers sometimes view each other’s measurement tools does not arise from irreconcilable methodological differences
*  The relation between measurement validity and disputes about the meaning of concepts
*  Contextual specificity of measurement validity claims
*  The language used to discuss alternative procedures for measurement validation
Relation between reliability and validity:
(1) Validity is sometimes understood as exclusively involving bias, that is, systematic error that takes a consistent direction or form, as opposed to the random error associated with reliability
(2) Alternatively, some scholars hesitate to view scores as valid if they contain large amounts of random error
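A minimal sketch of this distinction, using simulated data and numpy (none of this appears in the article; the numbers and variable names are illustrative assumptions): systematic error shifts scores in a consistent direction and so threatens validity, while random error leaves the average unchanged but makes scores less reliable.

```python
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.uniform(0, 10, size=1000)        # hypothetical "true" values of a systematized concept

biased_scores = true_scores + 2.0                           # systematic error: consistent direction (a validity problem)
noisy_scores = true_scores + rng.normal(0, 2.0, size=1000)  # random error: no consistent direction (a reliability problem)

print("mean error, biased indicator:", round(np.mean(biased_scores - true_scores), 2))  # ~2.0
print("mean error, noisy indicator: ", round(np.mean(noisy_scores - true_scores), 2))   # ~0.0
print("correlation of noisy indicator with true scores:",
      round(np.corrcoef(noisy_scores, true_scores)[0, 1], 2))                           # attenuated by random error
```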
Three types of validation:
*  Content: Are key elements omitted from the indicator? Are inappropriate elements included in the indicator?
*  Problems: on its own, content validation is incomplete. First, although a necessary condition, its findings are not a sufficient condition for establishing validity. Second, there is a trade-off between parsimony and completeness, which arises because indicators routinely fail to capture the full content of a systematized concept
*  Convergent/discriminant [540]: Are the scores (level 4) produced by alternative indicators (level 3) of a given systematized concept (level 2) empirically associated and thus convergent? Do these indicators have a weaker association with indicators of a second, different systematized concept, thus discriminating this second group of indicators from the first?
*  Problems: scholars might wrongly assume that in convergent/discriminant validation empirical findings always dictate conceptual choices, and the interpretation of low correlations among indicators can be ambiguous (the logic of this type of validation is sketched in the first code example after this list)
* Nomological/construct [542]: In a domain of research in which a given causal hypothesis is reasonably well established, we ask: Is this hypothesis again confirmed when the cases are scored (level 4) with the proposed indicator (level 3) for a systematized concept (level 2) that is one of the variables in the hypothesis?
*  Problems: risk of circularity, and the procedure presupposes the valid measurement of the other systematized concept involved in the hypothesis (see the second sketch after this list)
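The logic of convergent/discriminant validation can be illustrated with a minimal sketch (not from the article; the simulated concepts, indicators, and noise levels are assumptions): alternative indicators of the same systematized concept should correlate more strongly with one another than with an indicator of a different concept.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
concept_a = rng.normal(size=n)                          # latent systematized concept A (illustrative)
concept_b = rng.normal(size=n)                          # a second, different systematized concept B

indicator_a1 = concept_a + rng.normal(0, 0.5, size=n)   # two alternative indicators of concept A
indicator_a2 = concept_a + rng.normal(0, 0.5, size=n)
indicator_b1 = concept_b + rng.normal(0, 0.5, size=n)   # an indicator of concept B

convergent_r = np.corrcoef(indicator_a1, indicator_a2)[0, 1]    # expected to be high (convergent)
discriminant_r = np.corrcoef(indicator_a1, indicator_b1)[0, 1]  # expected to be low (discriminant)
print(f"convergent r = {convergent_r:.2f}, discriminant r = {discriminant_r:.2f}")
```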
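A similar sketch for nomological/construct validation, again with assumed simulated data: if an X -> Y association is treated as well established, scores produced by a proposed indicator of X should reproduce that association.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
established_x = rng.normal(size=n)                          # previously validated indicator of concept X
outcome_y = 0.8 * established_x + rng.normal(0, 1, size=n)  # a "well-established" X -> Y association

proposed_x = established_x + rng.normal(0, 0.4, size=n)     # proposed new indicator of concept X

slope, intercept = np.polyfit(proposed_x, outcome_y, 1)     # does the proposed indicator reproduce the association?
print(f"estimated slope using the proposed indicator: {slope:.2f}")  # should recover a clearly positive effect
```

Note that the sketch presupposes valid scores for the outcome variable, which is exactly the circularity problem flagged in the last bullet above.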