Reliable, valid, or both?
Reliability, as it pertains to assessment, is a measure of consistency. For example, if a group of people took a test on two different occasions, they should get nearly the same scores both times, assuming that no memory of the items carries over to the second administration. If an examinee scored high the first time and low the second, we wouldn’t have any basis for interpreting the test’s results. Initially, the most common means of determining reliability was to have the examinee take the same test twice or take alternate forms of a test. The scores from the two administrations would then be correlated. Generally, one would hope for the correlation between the two administrations to fall between .85 and the maximum possible correlation of +1.00. Reliability is the essential condition of a test: if a test is not reliable, it has to be disregarded.
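To make the idea concrete, here is a minimal sketch of a test-retest reliability calculation. The scores are made up purely for illustration and do not come from any actual instrument; the calculation simply correlates the two sets of scores.

```python
# A minimal sketch of test-retest reliability as a Pearson correlation.
# The scores below are invented illustration data, not results from any real test.
from statistics import correlation  # available in Python 3.10+

# Scores for the same ten examinees on two administrations of the same test.
first_administration = [12, 18, 25, 31, 14, 22, 28, 19, 35, 27]
second_administration = [13, 17, 26, 30, 15, 21, 29, 20, 34, 28]

r = correlation(first_administration, second_administration)
print(f"Test-retest reliability coefficient: r = {r:.2f}")

# A coefficient of roughly .85 or above (toward the maximum of +1.00)
# is generally taken as evidence of acceptable consistency.
```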
That being said, a test can be reliable without being valid. A central component of early childhood screening test validity is how accurately the test identifies children who may be in need of services. However, no matter how careful examiners are, there will be some error in the decision-making process. Some children identified as OK may actually be in the Potentially Delayed range, and vice versa. Verifying the validity of the tests you use is paramount in identifying kids who are in need of extra support.
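The kind of decision error described above can be summarized by comparing screening decisions to the outcome of a fuller evaluation. The sketch below uses entirely hypothetical results to show how missed children, over-referrals, and overall agreement might be tallied; it is illustrative only and does not reflect ESI-3 data or scoring rules.

```python
# A minimal sketch of screening decision errors, using made-up data.
# Each pair is (screening_decision, outcome_of_a_fuller_evaluation).
results = [
    ("OK", "OK"), ("OK", "OK"),
    ("OK", "Potentially Delayed"),                                # one missed child
    ("Potentially Delayed", "Potentially Delayed"),
    ("Potentially Delayed", "OK"),                                # one over-referral
    ("OK", "OK"), ("Potentially Delayed", "Potentially Delayed"),
]

missed = sum(1 for screen, actual in results
             if screen == "OK" and actual == "Potentially Delayed")
over_referred = sum(1 for screen, actual in results
                    if screen == "Potentially Delayed" and actual == "OK")
agreement = sum(1 for screen, actual in results if screen == actual) / len(results)

print(f"Missed children: {missed}, over-referrals: {over_referred}, "
      f"agreement: {agreement:.0%}")
```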
With ESI-3, you don’t have to choose between valid and reliable!
If you need to take a closer look at overall abilities to determine where additional support may be necessary, the Early Screening Inventory, Third Edition, will give you the tools you need to individually screen kids ages 3:0–5:11 in several areas of development.
Read the previous articles in this series.
For more information on developmental screening with the ESI-3, visit