
Construct validity


The term construct validity was coined by Paul E. Meehl and R. C. Challman in 1954, and the concept was elaborated by Meehl and Lee J. Cronbach in 1955. In scientific research (including the social sciences and psychometrics), construct validity refers to the validity of inferences that observations or measurement tools actually represent or measure the construct being investigated.[1] Constructs are abstractions deliberately created by researchers to conceptualize a latent variable, which is the presumed cause of scores on a given measure, although it is not directly observable.[1] Construct validity is related to the theoretical ideas behind the trait under consideration, including the concepts that organize how aspects of personality, intelligence, etc. are viewed.[2] The measurement tool seeks to operationalize (translate into practice) the concept, typically by measuring several observable phenomena that are expected to reflect the underlying psychological concept. Construct validity is a means of assessing how well this has been accomplished. In lay terms, construct validity answers the question: "Are we actually measuring what we think we are measuring?" A construct is not restricted to one set of observable indicators or attributes; it is common to a number of such sets. Construct validity can therefore be evaluated by statistical methods that test whether a common factor underlies several measurements that use different observable indicators. This view of a construct rejects the operationist position that a construct is nothing more nor less than the operations used to measure it.

Definition[edit]

Construct validity refers to "the degree to which a test measures what it claims, or purports, to be measuring."[3] In other words, it is at issue "whenever a test is to be interpreted as a measure of some attribute or quality which is not operationally defined."[4]


History[edit]

Throughout the 1940s, scientists sought ways to validate experiments before publishing them. The result was a myriad of different validities (intrinsic validity, face validity, logical validity, empirical validity, etc.), which made it difficult to tell which were actually the same and which were not useful at all. Until the mid-1950s there were very few universally accepted methods for validating psychological experiments, mainly because no one had established exactly which qualities of an experiment should be examined before publication. Between 1950 and 1954 the APA Committee on Psychological Tests met and discussed the issues surrounding the validation of psychological experiments.[5] The main ideas to emerge from these meetings were predictive validity, concurrent validity, content validity, and construct validity. Since there was no official definition of construct validity at the time, there was some disagreement over what it should measure, but most researchers agreed that, at a basic level, it would investigate whether a study actually measured what it was meant to measure. Finally, the group outlined several ways to validate the constructs of psychological experiments: examining group differences, correlation matrices, factor analysis, studies of internal structure, studies of change over occasions, and studies of process.


Evaluation[edit]

There are several approaches to evaluating construct validity. One method is the known-groups technique, which involves administering the measurement instrument to groups expected to differ due to known characteristics.[1] Hypothesized relationship testing involves logical analysis based on theory or prior research.[1] Factor analysis uses computational methods to identify and group together items measuring the same underlying attribute.[1] Another important evaluation method is the multitrait-multimethod matrix, a way of examining construct validity described by Campbell and Fiske in 1959.[6] This model examines convergence (evidence that different measurement methods of a construct give similar results) and discriminability (the ability to differentiate the construct from other, related constructs).[1]
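The logic of convergence and discriminability can be illustrated with a small numerical sketch. The example below is hypothetical: it uses synthetic data, and the trait names, measurement methods, and noise levels are assumptions chosen only to show that two methods measuring the same construct should correlate strongly, while measures of different constructs should correlate weakly.

```python
# Hypothetical sketch of convergent and discriminant evidence.
# All data are synthetic; "anxiety" and "extraversion" are example traits.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Latent trait scores (the constructs themselves, not directly observable)
anxiety = rng.normal(size=n)
extraversion = rng.normal(size=n)

# Two different methods measuring the same construct (anxiety),
# each adding its own independent measurement error
anxiety_self_report = anxiety + 0.5 * rng.normal(size=n)
anxiety_observer = anxiety + 0.5 * rng.normal(size=n)

# A method measuring a different construct
extraversion_self_report = extraversion + 0.5 * rng.normal(size=n)

# Convergent evidence: same construct, different methods -> high correlation
convergent = np.corrcoef(anxiety_self_report, anxiety_observer)[0, 1]

# Discriminant evidence: different constructs -> correlation near zero
discriminant = np.corrcoef(anxiety_self_report, extraversion_self_report)[0, 1]

print(f"convergent r = {convergent:.2f}")
print(f"discriminant r = {discriminant:.2f}")
```

A full multitrait-multimethod analysis would examine the entire matrix of every trait-method combination rather than just two correlations, but the same comparison (same-trait correlations should exceed cross-trait correlations) drives the interpretation.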


Construct validity is enhanced by carefully defining the relevant treatment, outcome, setting, and population constructs.[1] The study participants should be good examples of the underlying constructs (e.g., if studying "disadvantaged" people, the participants should represent the researcher's construct of "disadvantaged").[1] Construct validity is threatened by participant reactivity to the study situation (e.g., the Hawthorne effect), altered behaviour due to the novelty of a new treatment, researcher expectations, and diffusion or contamination of the treatment conditions.

Construct Validity in Experiments[edit]

Construct validity is best illustrated with a landmark experiment, such as Milgram's study of obedience. The purpose of that study was to examine whether a person would continue to do something they were uncomfortable with simply because someone in authority told them to do so. Essentially, it was intended to test whether people are obedient.


Participants were recruited through voluntary response to a newspaper advertisement. They were all men of various ages, levels of education, and occupations. An "experimenter" ran the experiment, and a "learner" acted in the study. The participants were instructed by the "experimenter" to shock the "learner" every time the learner responded incorrectly to studied information. There were 30 different levels of shock to be administered, and the participants could hear the "learner's" reaction to each shock. If a participant did not want to continue with the shocks, he was heavily encouraged to continue. Participants who refused before the 13th level were considered defiant; those who continued past the 13th level were considered obedient.



Now consider this study with regard to construct validity: does the level at which a person decides not to continue with the shocks accurately measure that person's level of obedience? There are two ways of looking at this question. The definitionalist view of construct validity holds that it is essential to define exactly what is being looked for when something is tested.[7] In this case: is the level of shock testing for level of obedience, and level of obedience only? The other view, the relationist view, holds that while it is important to test for obedience, since that is the intention of the study, it is acceptable for other factors to enter into the testing as long as they can be related to obedience.[7]


Milgram's study does appear to measure obedience effectively, but it is evident that other factors may come into play, so while the study does seem to have construct validity, it most likely aligns with the relationist view. The experiment could also simply be showing that some people are oblivious to what is going on around them. Participants could also have felt a sense of responsibility to finish because they were monetarily compensated for their time; the money was guaranteed regardless of their performance, but some participants may have taken the compensation to heart.


Within the relationist view, the shock level does measure level of obedience, and the presence of construct validity made Milgram's study a valid one for its testing purposes at the time it was administered. Today, however, it would not be approved by an institutional review board because of the possible psychological harm to participants. Even so, it remains a landmark study and a good example of proper construct validity.[8]


Threats to Construct Validity[edit]

There are several different threats to construct validity. As defined by William Trochim, these consist of inadequate preoperational explication of constructs, mono-operation bias, mono-method bias, interaction of different treatments, interaction of testing and treatment, restricted generalizability across constructs, confounding constructs and levels of constructs, hypothesis guessing, evaluation apprehension, and experimenter expectancies.[9]


Inadequate Preoperational Explications of Constructs[edit]

"Inadequate preoperational explication of constructs" refers to failing to define the construct of the experiment well enough before operationalizing it.[10]


Mono-Operation Bias[edit]

"Mono-operation bias" refers to using only one variable, or only one way of dealing with a problem. The difficulty with this is that a single operation does not capture the different aspects of the experiment or the true reasons for conducting it.[11]

Mono-Method Bias[edit]

"Mono-method bias" refers to the possibility that the way an experimenter measured or observed the experiment may correlate with things they did not expect.[12]

Interaction of Different Treatments[edit]

"Interaction of different treatments" concerns an experiment in which an experimenter implements a plan to help a certain population and assumes that the plan caused an effect, when that effect might instead have been produced by other influences also acting on that population.[13]


Interaction of Testing and Treatment[edit]

"Interaction of testing and treatment" involves labeling a program without addressing the possibility that the testing itself is part of the treatment.[14]


Restricted Generalizability Across Constructs[edit]

"Restricted generalizability across constructs" means that an experiment worked in one particular situation but might not work in any situation other than that one.[15]


Hypothesis Guessing[edit]

In "hypothesis guessing", the participants in an experiment may try to figure out what its goal is, and their ideas about what they think is being studied may affect how they respond in the experiment and alter the results.[16]


Evaluation Apprehension[edit]

"Evaluation apprehension" describes how a participant's mind and body react to the knowledge that they are being experimented on, which can alter the results of an experiment.[17]


Experimenter Expectancies[edit]

"Experimenter expectancies" refers to the possibility that the way a researcher wants or expects an experiment to go may affect why the experiment went a certain way.[18]




References[edit]

  1. ^ a b c d e f g h Polit, D. F.; Beck, C. T. (2012). Nursing Research: Generating and Assessing Evidence for Nursing Practice, 9th ed. Philadelphia, USA: Wolters Kluwer Health, Lippincott Williams & Wilkins.
  2. ^ Pennington, Donald (2003). Essential Personality. Arnold. p. 37. ISBN 0-340-76118-0.
  3. ^ Brown, J. D. (1996). "Testing in language programs" [http://jalt.org/test/bro_8.htm]. Upper Saddle River, NJ: Prentice Hall Regents. Retrieved April 6, 2013.
  4. ^ Cronbach, L. J.; Meehl, P. E. (1955). "Construct Validity in Psychological Tests". Psychological Bulletin, 52, 281–302. Retrieved April 6, 2013.
  5. ^ Cronbach, L. J.; Meehl, P. E. (1955). "Construct Validity in Psychological Tests". Psychological Bulletin, 52, 281–302. Retrieved April 6, 2013.
  6. ^ Campbell, D. T.; Fiske, D. W. (1959). "Convergent and discriminant validation by the multitrait-multimethod matrix". Psychological Bulletin, 56, 81–105.
  7. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  8. ^ Ksenyeh, Ed; Liu, David (2001). Conflict, Order and Action: Readings in Sociology. pp. 134–149. ISBN 1-55130-192-X.
  9. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  10. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  11. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  12. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  13. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  14. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  15. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  16. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  17. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
  18. ^ Trochim, William M. The Research Methods Knowledge Base, 2nd Edition. [http://www.socialresearchmethods.net/kb/considea.php]. Retrieved April 6, 2013.
