Test identification

Name of test Lucid Assessment System for Schools (LASS 11–15)
Version 4th Edition
Previous version(s) LASS Secondary, 1st, 2nd and 3rd editions. Note that LASS 8–11 is also available for the primary age range.
Subjects Literacy
Summary

Computerised system for assessment of dyslexic tendencies and other learning needs. LASS 8–11 and 11–15 are multi-functional assessment systems designed to highlight differences between actual and expected literacy levels.

Assessment screening

Subscales

Single word reading, sentence reading, spelling, reasoning, mobile (auditory memory), cave (visual memory), nonwords (phonics skills), and phonological processing (segments).

Additional References
Horne, J. (2002). Development and Evaluation of Computer-based Techniques for Assessing Children in Educational Settings (PhD thesis). University of Hull.
Horne, J. (2007). Gender differences in computerised and conventional educational tests. Journal of Computer Assisted Learning, 23(1), 47–55.
Authors Horne, J., Keates, A., & Stansfield, J. 
Publisher Lucid/GL Assessment
Test source https://www.gl-assessment.co.uk/products/lucid-lass-11-15/
Guidelines available? Yes
Norm-referenced scores? Yes
Age range 11–15 years
Key Stage(s) applicable to KS3, KS4
UK standardisation sample Yes
Publication date 2010
Re-norming date N/A

Eligibility

Validity measures available? Yes
Reliability measures available? Yes
Note whether shortlisted, and reasons why not if relevant Shortlisted

Administration format

Additional information about what the test measures

Word reading, spelling, sentence reading.

Are additional versions available?

There is a parallel form of LASS for younger children (LASS 8–11). Previous editions have alternated between the names LASS Secondary and LASS 11–15.

Can subtests be administered in isolation?

Yes

Administration group size

Individual, small group

Administration duration

45 minutes (5 minutes per subtest)

Description of materials needed to administer test

Computer, headphones or speakers, keyboard and mouse.

Any special testing conditions?

No

Response format

Response mode

Electronic

What device is required

Computer or tablet

Question format

Mixed

Progress through questions

Adaptive for sentence reading, spelling and reasoning. Flat for word reading, nonword reading, segments and cave. Discontinue rule for mobile.

Assessor requirements

Is any prior knowledge/training/professional accreditation required for administration?

No

Is administration scripted? Yes

Scoring

Description of materials needed to score test

Computerised scoring

Types and range of available scores

Age standardised scores, stanines (1–9), percentiles, T-scores, z-scores

Score transformation for standard score

Age standardised

Age bands used for norming

3 months
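
The listed score types are all standard transformations of the same underlying z-score, derived against the norm for the test taker's 3-month age band. As a rough illustration only, the Python sketch below shows how such a conversion typically works; the norm-table values, band boundaries and function names are hypothetical assumptions, not taken from the LASS manual, and in practice the LASS software performs these conversions automatically.

```python
from math import erf, floor, sqrt

# Hypothetical norm table keyed by 3-month age bands (ages in months).
# Means and SDs are illustrative placeholders, not values from the LASS norm data.
NORMS = {
    (132, 134): {"mean": 21.4, "sd": 5.2},   # 11 years 0-2 months
    (135, 137): {"mean": 22.1, "sd": 5.3},   # 11 years 3-5 months
    # ... further 3-month bands up to 15 years 11 months ...
}

def z_to_percentile(z: float) -> float:
    """Cumulative normal probability expressed as a percentile (0-100)."""
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))

def score_profile(raw_score: float, age_months: int) -> dict:
    """Convert a raw subtest score into the derived score types listed above."""
    band = next(v for (lo, hi), v in NORMS.items() if lo <= age_months <= hi)
    z = (raw_score - band["mean"]) / band["sd"]
    return {
        "z_score": round(z, 2),
        "standard_score": round(100 + 15 * z),   # age standardised, mean 100, SD 15
        "t_score": round(50 + 10 * z),           # mean 50, SD 10
        "percentile": round(z_to_percentile(z), 1),
        "stanine": min(9, max(1, floor((z + 2.25) / 0.5) + 1)),  # 9-point band
    }

# Example: raw score of 27 for a student aged 11 years 4 months (136 months).
print(score_profile(27, 136))
```

The sketch is only meant to make the relationship between the listed score types concrete; consult the teacher's manual (Horne et al., 2010) for the actual norm data and scoring rules.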

Scoring procedures

Computer scoring with direct entry by test taker.

Automatised norming

Computerised

Construct Validity

Does it adequately measure literacy, mathematics or science? Rating: 3 of 4
Does it reflect the multidimensionality of the subject?

Generic literacy (with specific subtests)

Construct validity comments (and reference for source)

The teacher’s manual (Horne et al., 2010) illustrates construct validity through correlations with other appropriate instruments. Construct validity is excellent for sentence reading and spelling, and adequate for reasoning and the diagnostic measures. Sentence reading correlates with NFER sentence completion at r = .75. Spelling correlates with British Spelling Test Series 3 at r = .88. Reasoning correlates with the Matrix Analogies Test at r = .52. Cave correlates with Wechsler Memory Scales spatial span at r = .37, mobile with WMS digit span at r = .55, nonwords with PhAB nonword reading at r = .43, and segments with PhAB spoonerisms at r = .45. Contrasted group validity is also reported in the teacher’s manual (Horne et al., 2010): in one study, 30 students with dyslexia showed lower scores on all tests except reasoning and visual memory, and 17 students with other SEN showed lower scores on all tests. 79% of students with dyslexia were correctly identified.

Criterion Validity

Does test performance adequately correlate with later, current or past performance on a criterion measure of attainment? Rating: 0 of 4
Summarise available comparisons

None available to review.

Reliability

Is test performance reliable? Rating: 2 of 4
Summarise available comparisons

Internal consistency is not reported. Temporal stability is illustrated in a study with a 4-week test-retest interval, reported in the teacher’s manual (Horne et al., 2010). The coefficients indicate good stability for the literacy measures (sentence reading .85; spelling .93), adequate stability for nonwords and segments (nonwords .77; segments .74), and inadequate stability for reasoning, cave and mobile (reasoning .51; cave .53; mobile .58).

Is the norm-derived population appropriate and free from bias?

Does the standardisation sample represent the target/general population well? Yes
If any biases are noted in sampling, these will be indicated here.

Sample size is good, but insufficient information is provided about sampling procedures and demographics to confirm that the norming sample was free from bias.

Sources

Horne, J., Keates, A., & Stansfield, J. (2010). LASS 11–15: Lucid Assessment System for Schools for ages 11 to 15 years. Teacher’s manual. Beverley, UK: Lucid Research Ltd.