Test identification

Name of test New Group Reading Test
Version NGRT
Previous version(s) GRT, GRTII
Subjects Literacy
Summary Assesses students’ reading strengths and weaknesses

Assessment screening

Subscales Single test comprising several parts: Phonics, Sentence completion, Passage comprehension
Additional References n/a
Authors Burge, B., Styles, B., Brzyska, B., Cooper, L., Shamsan, Y., Saltini, F., & Twist, L.
Publisher GL Assessment
Test source https://www.gl-assessment.co.uk/products/new-group-reading-test-ngrt/
Guidelines available? Yes
Norm-referenced scores? Yes
Age range 6-16 Years
Key Stage(s) applicable to KS1, KS2, KS3, KS4
UK standardisation sample Yes
Publication date 2010
Re-norming date n/a


Validity measures available? Yes
Reliability measures available? Yes
Reason for exclusion from shortlist n/a (shortlisted)

Evaluation and Appraisal

Are additional versions available? Digital and paper versions. The digital version is adaptive and comprises 3 equivalent forms (A, B, C, for use at different times of year), which may be administered to students across the age range 7-16 years (not recommended for use in Year 1). The paper version, published in 2010, comprises 4 levels of tests with equivalent forms for students aged 6-16 years, giving 7 alternate forms to cover the age range:
Test 1: standard scores 5;00-7;05
Tests 2A/2B: standard scores 6;00-10;05
Tests 3A/3B: standard scores 9;00-14;05
Tests 4A/4B: standard scores 13;00-17;05
Can subtests be administered in isolation? No. The digital test is adaptive: students begin with sentence completion and then move to passage comprehension or phonics, depending on performance. If given the phonics section, students will not proceed to passage comprehension.
Administration group size Whole class
Administration duration Untimed; duration varies between students. Digital version: approximately 30 minutes to complete the test; the publisher recommends allowing 40 minutes including introduction and administration. Paper version: all but the weakest readers should be able to complete it in 45-50 minutes.
Description of materials needed to administer test Digital administration requires a Testwise account. Computer administration: each student requires a computer, mouse, and headphones. Tablet administration: each student requires a tablet and headphones. Paper administration requires a copy of the teacher's guide and one response booklet for each student (note that test level is determined by age).
Any special testing conditions? Formal test environment, quiet room without interruptions, invigilated.

Response format

Response mode Electronic or paper and pencil
What device is required Computer/tablet
Question format Multiple choice
Progress through questions Digital version: adaptive. Paper version: flat.

Assessor requirements

Is any prior knowledge/training/profession accreditation required for administration? Not stated (no)
Is administration scripted? Yes


Description of materials needed to score test Digital administration: scoring is automatic in Testwise. Paper administration requires the teacher's guide or the bureau service.
Types and range of available scores Standard age scores (SAS, 60-140); Stanine scores (overall, and separately for sentence completion and passage comprehension; 1-9); National percentile ranks (0-100); Group rank (dependent on group size); Reading ages (age-equivalent scores: digital version <6;11 to >14;00; paper version Test 1 <5;03 to >8;00, Tests 2A/2B <5;08 to >11;00, Tests 3A/3B <6;07 to >15;00, Tests 4A/4B <7;09-16;08); NC reading level (based on teacher assessment collected when the test was developed). Phonics raw scores are broken down by item type (initial letters, sounds like, final letter sounds, initial letter sounds) plus an overall score. Passage comprehension responses are broken down by question type; note, however, that the adaptive nature of the test and the variable question types used for different passages mean that these are not necessarily comparable between children.
Score transformation for standard score Age standardised
Age bands used for norming 2 months
Scoring procedures Computer scoring with direct entry by test taker, OR bureau service (scored by publisher/distributor)
Automatised norming Computerised/online

Construct Validity

Does it adequately measure literacy, mathematics or science?
Does it reflect the multidimensionality of the subject? Generic literacy
Construct validity comments (and reference for source) From the Teacher's Guide (paper version), the test appears to have face validity, as a good deal of detail is presented about test development, alignment to the National Curriculum, and the use of multiple trials prior to standardisation. However, that alignment predates the most recent National Curriculum, and it is unclear what underlying construct the tests actually aim to measure (no theoretical explanation is given), making construct validity very difficult to assess. The use of multiple-choice response formats is a further threat to validity. The manual states that item response theory methodology was used to select items during test development and standardisation; however, no statistical measures of construct validity are provided.

Criterion Validity

Does test performance adequately correlate with later, current or past performance?
Summarise available comparisons Evidence of criterion validity is available from two sources. The technical information (GL Assessment, 2018) indicates that information on Teacher Assessment (TA) levels in Reading for England was collected when the equating study was conducted in 2012; the correlation between TA levels and the reading ability score was 0.8. Results were also collected from 7,275 Year 6 students who took the NGRT and the Key Stage 2 SATs in England in 2016/17. The correlation between NGRT SAS scores and the KS2 scaled scores in Reading was 0.75, and for Grammar, Punctuation and Spelling it was 0.72.


Is test performance reliable?
Summarise available comparisons It is not clear whether different norms are used for the paper and digital versions of the tests, or whether any equating studies have been conducted between the digital and paper versions. Given that the digital version is adaptive and the paper version is flat, this is problematic. The Technical Information (GL Assessment, 2018) reports that Cronbach's alpha is excellent (.9) and the SEM is 4.7. Temporal stability and equivalence reliability have not been assessed separately; however, a study of around 59,000 students who took different versions of the NGRT on average 6 months apart found a correlation of 0.83, and the correlation for around 44,000 students who took the tests on average one year apart was 0.82. Again, though, it is not clear whether these figures relate to the digital or the paper version of the test, and reliability therefore remains unclear.

Is the norm-derived population appropriate and free from bias?

Is population appropriate and free from bias? No
If any biases are noted in sampling, these will be indicated here. The digital version uses the same norms as the paper version. However, the digital version is adaptive and has 3 versions, compared with 11 versions of the paper test. An equating study has been conducted, but very little information is provided, and it is unclear whether the equating study took place before the additional 4 paper forms were standardised. Norming of the original paper version took place in February 2010 with a large, stratified sample.


Sources
GL Assessment (2017). NGRT Digital: Guidance and information for teachers (4th ed.). GL Assessment. Retrieved from https://www.gl-assessment.co.uk/media/234004/ngrt-digital-guidance_0118b.pdf
GL Assessment (2018). Technical Information: New Group Reading Test (NGRT) Digital Edition. GL Assessment. Retrieved from https://www.gl-assessment.co.uk/media/294737/ngrt-technical-final-proof.pdf
Burge, B., Styles, B., Brzyska, B., Cooper, L., Shamsan, Y., Saltini, F., & Twist, L. (2010). NGRT: New Group Reading Test. Teacher's Guide (paper version). London, UK: GL Assessment Ltd.
GL Assessment (2020). Data Guide. GL Assessment. Retrieved from https://www.gl-assessment.co.uk/media/352420/ngrt-data-guide.pdf