Test identification

Name of test Lucid Assessment System for Schools Junior (LASS 8–11), 5th Edition
Version 5th Edition
Previous version(s) 1st, 2nd, 3rd and 4th Editions. Note that LASS 11–15 is also available for the secondary age range.
Subjects Literacy
Summary

Computerised system for assessment of dyslexic tendencies and other learning needs. LASS 8–11 and 11–15 are multi-functional assessment systems designed to highlight differences between actual and expected literacy levels.

Assessment screening

Subscales

Sentence reading, single word reading, spelling, non-verbal reasoning, verbal reasoning, sea creatures, mobile phone, funny words, word chopping.

Additional References
Horne, J. (2002). Development and Evaluation of Computer-based Techniques for Assessing Children in Educational Settings (PhD thesis). University of Hull.
Horne, J. (2007). Gender differences in computerised and conventional educational tests. Journal of Computer Assisted Learning, 23(1), 47–55.
Authors Joanna Horne
Publisher Lucid/GL Assessment
Test source https://www.gl-assessment.co.uk/support/lucid-lass-product-support/
Guidelines available? Yes
Norm-referenced scores? Yes
Age range 8–11 years
Key Stage(s) applicable to KS2
UK standardisation sample Yes
Publication date 2020
Re-norming date N/A

Eligibility

Validity measures available? Yes
Reliability measures available? Yes
Note whether shortlisted, and reasons why not if relevant Shortlisted

Administration format

Additional information about what the test measures

Word reading, sentence reading, spelling.

Are additional versions available?

There is a parallel form of LASS for older children (LASS 11–15).

Can subtests be administered in isolation?

Yes

Administration group size

Individual, small group

Administration duration

45 minutes (5 minutes per subtest)

Description of materials needed to administer test

Computer, headphones or speakers, keyboard and mouse.

Any special testing conditions?

No

Response format

Response mode

Electronic

What device is required

Computer or tablet

Question format

Mixed

Progress through questions

Adaptive for all the literacy tasks. Progressive for two of the diagnostic tasks (sea creatures and mobile phone).

Assessor requirements

Is any prior knowledge/training/profession accreditation required for administration?

No

Is administration scripted? Yes

Scoring

Description of materials needed to score test

Computerised scoring

Types and range of available scores

Standard age scores, stanines, percentiles, T-scores, z-scores.

Score transformation for standard score

Age standardised
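The score types listed above are all transformations of the same underlying age-standardised ability estimate. As a rough illustration of how they relate (assuming a normal distribution; the actual LASS conversion tables may differ), a z-score maps onto the other scales like this:

```python
import math

def z_to_scores(z: float) -> dict:
    """Convert a z-score to common derived score scales.

    Assumes a normal distribution; real norm tables are built
    empirically and may deviate slightly from these formulas.
    """
    percentile = 50 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF x 100
    t_score = 50 + 10 * z                               # mean 50, SD 10
    standard_age_score = 100 + 15 * z                   # mean 100, SD 15
    stanine = min(9, max(1, round(2 * z + 5)))          # mean 5, SD 2, range 1-9
    return {
        "percentile": percentile,
        "T": t_score,
        "SAS": standard_age_score,
        "stanine": stanine,
    }

# One standard deviation above the mean:
print(z_to_scores(1.0))
```

A z of 1.0 corresponds to a standard age score of 115, a T-score of 60, stanine 7, and roughly the 84th percentile.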

Age bands used for norming

3 months

Scoring procedures

Computer scoring with direct entry by test taker.

Automatised norming

Computerised

Construct Validity

Does it adequately measure literacy, mathematics or science? Rating: 3 of 4
Does it reflect the multidimensionality of the subject?

Generic literacy (with specific subtests)

Construct validity comments (and reference for source)

The user manual (Horne, 2020) illustrates construct validity with reference to studies that indicate contrasted-groups validity and correlations with other tests. Contrasted-groups validity was illustrated by comparing dyslexic and non-dyslexic students (90 dyslexic students and 2,500 non-dyslexic). The effect size of the group difference was large for sentence reading and spelling, moderate for phonological and memory tasks, small for verbal reasoning and word reading, and non-existent for non-verbal reasoning. Another study reported in the user manual (sample size approx. 444) indicated construct validity by comparing test performance with scores on the Suffolk Reading Scale. The literacy scales correlate well: sentence reading = .732, spelling = .655, word reading = .569. Non-literacy tasks show lower correlations, as would be predicted, and the verbal and phonological tasks have intermediate correlations (.57, .54, .519). Hence this also illustrates discriminant validity.
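The "large/moderate/small" labels above describe effect sizes for the dyslexic vs non-dyslexic group comparison. One common effect-size statistic for such comparisons is Cohen's d with a pooled standard deviation; the sketch below uses invented numbers for illustration, not the LASS data:

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Cohen's d: standardised mean difference between two groups,
    using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Invented illustrative values (non-dyslexic vs dyslexic group):
print(round(cohens_d(100, 15, 2500, 85, 15, 90), 2))  # → 1.0
```

By the conventional benchmarks, d around 0.2 is small, 0.5 moderate, and 0.8 or above large.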

Criterion Validity

Does test performance adequately correlate with later, current or past performance on a criterion measure of attainment? Rating: 0 of 4
Summarise available comparisons

None available to review.

Reliability

Is test performance reliable? Rating: 3 of 4
Summarise available comparisons

Internal consistency is reported in the user manual (Horne, 2020), with standardised alpha and Cronbach's alpha given for each subtest. This indicates substantial variability across subtests, but reliability on the literacy tasks is excellent: sentence reading (0.893, 0.982); single word reading (0.892, 0.963); spelling (0.906, 0.983); non-verbal reasoning (0.832, 0.963); verbal reasoning (0.774, 0.978); mobile phone (0.629, 0.831); sea creatures (0.749, 0.739); funny words (0.805, 0.953); word chopping (0.813, 0.959). Temporal stability is reported for a test-retest interval of 4–6 weeks: literacy measures show adequate to good stability, and the other measures show adequate reliability. Pearson's r: sentence reading 0.78; single word reading 0.74; spelling 0.80; non-verbal reasoning 0.68; verbal reasoning 0.63; mobile phone 0.57; sea creatures 0.50; funny words 0.59; word chopping 0.62.
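Cronbach's alpha, the internal-consistency coefficient quoted above, is computed from item-level score variances relative to the variance of the total score. A minimal sketch (with invented item scores, not the LASS data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of test items.

    items: list of equal-length lists, one list of scores per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-person total score
    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Invented right/wrong scores for five test takers on three items:
scores = [
    [1, 0, 1, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 1, 1, 0, 0],
]
print(round(cronbach_alpha(scores), 3))  # → 0.3
```

Values above roughly 0.9, as for the LASS literacy subtests, are conventionally regarded as excellent internal consistency.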

Is the norm-derived population appropriate and free from bias?

Does the standardisation sample represent the target/general population well? No
If any biases are noted in sampling, these will be indicated here.

The norming sample is excellent in size, and was selected through clustered probability sampling. However, all data were collected between May and July, i.e. at the end of the school year.

Sources


Horne, J. (2020). User Manual. LASS for ages 8 to 11 years. London, UK: GL Assessment.