Quiz: Reliability and Validity
Question 1
1 / 1 pts
Jose has developed a test that has poor reliability; he can seek to increase reliability by:
Increasing the number of test questions.
Decreasing the number of test questions.
Making the test questions more ambiguous.
Starting over and developing a new test.
Question 2
1 / 1 pts
When interviewed after taking an achievement test on three different occasions, participants reported that they had remembered some of the answers from the previous test administration; this is known as:
Practice effect
Content effect
Test-retest effect
Carryover effect
Question 3
1 / 1 pts
An administrator and a school psychologist were observing a child to assess for behavioral problems. Error may occur because of differences in what the two observers notice; this is reported as:
Content-sampling error
Time-sampling error
Interrater differences
Test-taker variables
Question 4
1 / 1 pts
You are reading about reliability of a test in the test manual and notice that the researchers report using a Spearman-Brown coefficient. You can infer that internal consistency reliability was measured using:
The Kuder-Richardson Formulas
Coefficient alpha
Split-half reliability
Test-retest
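The Spearman-Brown coefficient is the standard correction applied to split-half reliability, since correlating two half-tests estimates the reliability of a test only half as long. With r_hh denoting the correlation between the two halves, the usual form is

\[ r_{SB} = \frac{2\,r_{hh}}{1 + r_{hh}} \]

so, for example, a half-test correlation of .60 corresponds to an estimated full-test reliability of 2(.60)/1.60 = .75.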
Question 5
1 / 1 pts
A researcher administers an achievement test to the same group of participants on three different occasions. In reporting the results, he describes the error that occurs from repeatedly testing the same individuals; this is called:
Content-sampling error.
Time-sampling error.
Interrater differences error.
Test-taker variables error.
Question 6
1 / 1 pts
A researcher is concerned with measuring internal consistency reliability and has decided to use the Kuder-Richardson Formulas with a Likert scale test; this is a problem because the:
Test does not have dichotomous test items.
Researcher needs a second test for comparison.
Test does not measure internal consistency reliability.
Researcher is concerned with content sampling error.
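The Kuder-Richardson formulas assume dichotomously scored items (e.g., right/wrong). KR-20, for instance, is built from each item's proportion of correct responses p_i (with q_i = 1 - p_i), quantities that are undefined for multi-point Likert items:

\[ KR\text{-}20 = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} p_i q_i}{\sigma^2_{x}}\right) \]

where k is the number of items and \sigma^2_x is the variance of the total scores.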
Question 7
1 / 1 pts
You are attempting to account for a time-sampling error and decide to administer the test a second time. In discussing reliability, you report this as what method of estimating reliability?
Alternate forms
Test-retest
Split-half reliability
Internal consistency reliability
Question 8
1 / 1 pts
If the reliability coefficient of a test is determined to be .27, what percentage of the score variance is attributed to random chance or error?
27%
73%
2.7%
Unknown percentage
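Under classical test theory, the reliability coefficient estimates the proportion of observed-score variance attributable to true scores; the remainder is attributed to error:

\[ 1 - r_{xx} = 1 - .27 = .73 \approx 73\% \]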
Question 9
1 / 1 pts
The SEM for an achievement test is 2.45. Johnny scores 100, and we assume that 68% of the time his true score falls within ±1 SEM of his obtained score; this means the confidence interval would be between:
0 and 102.45
2.45 and 100
95.10 and 104.90
97.55 and 102.45
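The 68% confidence interval extends one standard error of measurement on either side of the obtained score:

\[ 100 - 2.45 = 97.55 \qquad 100 + 2.45 = 102.45 \]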
Question 10
1 / 1 pts
A researcher wants to measure content-sampling error with a Likert scale test. Which of the following methods would be best?
Test-retest
Coefficient Alpha
Kuder-Richardson Formulas
Interrater differences
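Coefficient alpha generalizes the Kuder-Richardson approach to items with more than two response options, such as Likert items, by using each item's variance \sigma^2_i in place of p_i q_i:

\[ \alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_{x}}\right) \]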
Question 11
1 / 1 pts
In terms of accurate prediction of a criterion variable, a person who is predicted to do well during the first semester of college (based on an SAT score) and then does poorly would fall into the _______________ quadrant.
True positive
True negative
False positive
False negative
Question 12
1 / 1 pts
The tripartite view of validity includes content validity, criterion validity, and:
Discriminant validity
Convergent validity
Content validity
Construct validity
Question 13
1 / 1 pts
Comparing pre- and post-test scores of two groups – one that experienced an intervention and one that did not – is an example of:
Factor analysis.
Contrasted group studies.
Experimental results.
Age differentiation studies.
Question 14
1 / 1 pts
To evaluate content validity evidence, test developers may use:
Expert judges
Factor analysis
Experimental results
Evidence of homogeneity
Question 15
1 / 1 pts
_______________ is calculated by correlating test scores with the scores of tests or measures that assess the same construct.
Convergent validity
Discriminant validity
Face validity
Content validity
Question 16
1 / 1 pts
When discussing the relationship between reliability and validity, which of the following is true?
High reliability always indicates low degree of validity.
High reliability always indicates high degree of validity.
Low reliability always indicates high degree of validity.
Low reliability always indicates low degree of validity.
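The constraint reliability places on validity can be seen in the attenuation relationship: an observed validity coefficient cannot exceed the square root of the product of the reliabilities of the test and the criterion, so low reliability necessarily limits validity, while high reliability guarantees nothing about it:

\[ r_{xy} \le \sqrt{r_{xx}\, r_{yy}} \]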
Question 17
1 / 1 pts
_______________ are concepts, ideas, or hypotheses that cannot be directly measured or observed.
Constructs
Variables
Standards
Specifications
Question 18
1 / 1 pts
The _______________ is characterized by assessing both convergent and discriminant validity evidence and displaying the data in a table of correlations.
Multitrait-multimethod matrix
Contrasted group study
Age differentiation study
Factor matrix
Question 19
1 / 1 pts
The goal of factor analysis is to:
Measure the effectiveness of specific interventions in research.
Reveal how scores differ from one group to the next.
Prove that the age of the individuals taking the test impacts their scores.
Reduce the number of variables to fewer, more general variables.
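In the common factor model, each of the p observed variables is written as a weighted combination of a smaller number m of latent factors plus a unique error term, which is how factor analysis reduces many variables to fewer, more general ones:

\[ x = \Lambda f + \varepsilon, \qquad f \in \mathbb{R}^{m},\; m < p \]

where \Lambda is the matrix of factor loadings.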