This study employed a data simulation to evaluate the impact of a strategy to reduce test length by including only high-quality test questions, where quality was defined by a statistical indicator of the degree to which a question distinguishes between more and less able test takers. The impact of this strategy on the rank ordering of simulated test takers according to their total test score was evaluated, as were the predictive validity and classification accuracy of scores based on the shorter tests. Empirical data from an undergraduate admission test were used to further investigate the effect of this strategy. Results showed that reducing test length by as much as half had no serious effects on the estimates of test-taker ability, whereas shortening the test beyond that point was problematic. Furthermore, the empirical data showed that although the reliability and criterion-related validity (i.e., the relationship of the test score to the outcome of interest) may not be seriously affected by shortening the test, the reduced score range of the shorter test may be too limited to differentiate between candidates based on test scores. In addition, the shorter test may not adequately cover the required content. Finally, the use of score bands in admission testing was discussed.
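The selection strategy described above can be sketched in a small simulation. The sketch below is illustrative only and is not the study's actual procedure: it assumes a two-parameter logistic (2PL) IRT model for generating responses, uses the corrected item-total correlation as the discrimination indicator, keeps the top half of the items, and checks how well the shortened test preserves the full-test rank ordering. All sample sizes and parameter values are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_examinees, n_items = 500, 40
theta = rng.normal(0, 1, n_examinees)   # latent ability of each test taker
b = rng.normal(0, 1, n_items)           # item difficulty
a = rng.uniform(0.3, 2.0, n_items)      # item discrimination (varies in quality)

# Probability of a correct answer under a 2PL model, then dichotomous responses
p = 1 / (1 + np.exp(-a * (theta[:, None] - b)))
x = (rng.random((n_examinees, n_items)) < p).astype(int)

full_score = x.sum(axis=1)

# Discrimination indicator: corrected item-total correlation
# (each item correlated with the score on the remaining items)
rest = full_score[:, None] - x
disc = np.array([np.corrcoef(x[:, j], rest[:, j])[0, 1] for j in range(n_items)])

# Shorten the test: keep only the top half of items by discrimination
keep = np.argsort(disc)[-n_items // 2:]
short_score = x[:, keep].sum(axis=1)

# How well does the half-length test reproduce the full-test ordering?
r = np.corrcoef(full_score, short_score)[0, 1]
print(f"full-vs-short score correlation: {r:.3f}")
```

In a simulation like this, scores on the half-length test typically correlate very highly with full-length scores, consistent with the finding that halving the test leaves ability estimates largely intact; note, however, that the shortened score range (here 0 to 20 instead of 0 to 40) offers fewer distinct score points for separating candidates.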