|Name of test||British Ability Scales – Third Edition|
|Previous version(s)||BAS, BAS II/BAS2, DAS, DAS II|
Assesses children's current intellectual functioning. The British Ability Scales (BAS) has long been established as a leading standardised battery in the UK for assessing a child's cognitive ability and educational achievement across a wide age range.
15 tests measure particular types of knowledge and/or skills, assessing aspects of intellectual functioning and basic academic skills:
- Verbal skills: word definitions; verbal similarities.
- Non-verbal reasoning: matrices; quantitative reasoning.
- Spatial skills: pattern construction; recognition of designs.
- Achievement: word reading; spelling; number skills.
- Diagnostic: recall of objects (immediate); recall of objects (delayed); recall of digits forward; recall of digits backward; speed of information processing; recognition of pictures.
|Authors||Colin D. Elliott & Pauline Smith|
|Age range||3–17;11 years|
|Key Stage(s) applicable to||KS1, KS2, KS3, KS4, KS5|
|UK standardisation sample||Yes|
|Validity measures available?||Yes|
|Reliability measures available?||Yes|
|Note whether shortlisted, and reasons why not if relevant||Shortlisted|
|Additional information about what the test measures||
Subtests measure literacy and maths.
|Are additional versions available?||
This is the third edition of the BAS. Within the test there are separate record forms and subtests for early years and school age. We focus on the school-age version here, as the early years version does not fit our selection criteria. There are two parallel reading tests.
|Can subtests be administered in isolation?||
Yes (in particular, the literacy and maths tests can be used without the ability tests; however, it is advised that all six core tests are used when ability is being assessed).
|Administration group size||
|Administration time||
Full battery: 90 minutes. Attainment tests only: 20 minutes.
|Description of materials needed to administer test||
Test kit: administration manual, test booklets, response forms, stopwatch, pencil and paper.
|Any special testing conditions?||
Tests should be administered in the sequence suggested.
|Response format||
Oral or paper and pencil.
|What device is required||
|Progress through questions||
|Is any prior knowledge/training/profession accreditation required for administration?||
|Is administration scripted?||Yes|
|Description of materials needed to score test||
Administration and scoring manual and scoring folder or SRS (online scoring).
|Types and range of available scores||
Raw scores, t-scores (20–80), standard scores (39–160), percentiles, age equivalents (5;00 to 18;00).
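The t-score (20–80) and standard-score (39–160) ranges listed above sit on the conventional metrics for instruments of this kind (t-scores: mean 50, SD 10; standard scores: mean 100, SD 15). A minimal sketch of the rescaling between the two metrics, assuming those conventional parameters (the manual's own conversion tables should be used for actual reporting):

```python
def t_to_standard(t, t_mean=50.0, t_sd=10.0, s_mean=100.0, s_sd=15.0):
    """Rescale a t-score onto the standard-score metric.

    Assumes the conventional t (mean 50, SD 10) and standard
    (mean 100, SD 15) parameters; these defaults are an assumption,
    not taken from the BAS3 manual.
    """
    z = (t - t_mean) / t_sd   # t-score -> z-score
    return s_mean + z * s_sd  # z-score -> standard score

# The t-score ceiling of 80 is z = +3.0, i.e. 145 on this conventional metric.
print(t_to_standard(80))  # 145.0
```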
|Score transformation for standard score||
|Age bands used for norming||
2-month bands until age 8, then 6-month bands.
Complex manual scoring (training required) or computer scoring with manual entry of responses from a paper form using an online service.
|Does it adequately measure literacy, mathematics or science?|
|Does it reflect the multidimensionality of the subject?||
Specific literacy (word reading, spelling) and generic mathematics
|Construct validity comments (and reference for source)||
The technical manual (Elliott & Smith, 2011b) provides a good deal of information about the rationale of the assessment. Construct validity is supported by statistical evidence including factor analyses. The evidence for the assessment as a whole is excellent. However, although correlations are reported between the core scales and the YARC, they are not reported between the achievement scales and other measures.
|Does test performance adequately correlate with later, current or past performance on a criterion measure of attainment?|
|Summarise available comparisons||
None available to review.
|Is test performance reliable?|
|Summarise available comparisons||
Excellent internal consistency is reported in the technical manual for achievement scores, and good internal consistency for cognitive scores (Elliott & Smith, 2011b); split-half reliability is reported for each age and each subscale, as well as overall. Overall reliabilities: spelling 0.96; number skills 0.95; word reading A 0.98; word reading B 0.97; word definitions 0.85; verbal similarities 0.88; matrices 0.84; quantitative reasoning 0.88; recognition of designs 0.76; pattern construction (std) 0.92. SEMs are reported in t-score units for cognitive measures and standard-score units for achievement tests: spelling 3.00; number skills 3.48; word reading A 2.54; word reading B 2.76; word definitions 3.91; verbal similarities 3.48; matrices 4.05; quantitative reasoning 3.50; recognition of designs 4.94; pattern construction (std) 2.96. Good temporal stability is also reported at 2- to 7-week test-retest intervals (Elliott & Smith, 2011b). However, note that this is based on random samples from the standardisation of previous editions (BAS2 and DAS2). Inter-rater reliability was reportedly excellent for subtests that require judgement (Elliott & Smith, 2011b): intraclass correlations were 0.99 for verbal similarities and word definitions and 0.95 for copying.
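The reported SEMs are broadly consistent with the classical relation SEM = SD·√(1 − r): for example, spelling gives 15·√(1 − 0.96) = 3.00, matching the reported value exactly. A short sketch of that classical formula (an assumption on our part; the manual derives SEMs per age band, so small discrepancies for other subtests are expected):

```python
import math

def sem(reliability: float, sd: float) -> float:
    """Classical standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1.0 - reliability)

# Achievement scores are on the standard-score metric (assumed SD = 15),
# cognitive scores on the t-score metric (assumed SD = 10).
print(round(sem(0.96, 15), 2))  # spelling: 3.0 (matches the reported 3.00)
print(round(sem(0.84, 10), 2))  # matrices: 4.0 (reported 4.05, band-averaged)
```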
|Is the norm-derived population appropriate and free from bias?|
|Does the standardisation sample represent the target/general population well?||Yes|
|If any biases are noted in sampling, these will be indicated here.||
Norms are derived from a large, representative, stratified sample.
Elliott, C. D., & Smith, P. (2011a). British Ability Scales 3: Administration and scoring manual. GL Assessment.
Elliott, C. D., & Smith, P. (2011b). British Ability Scales 3: Technical manual. GL Assessment.