The Flesch Reading Ease score

The Flesch score is (now) one of many easily obtained computer-based measures of text readability. The scores run from 0 to 100, and the higher the score, the easier the text. The original measure was created in 1943 by Rudolph Flesch to measure the readability of magazine articles (Klare, 1963). In essence, current implementations count the length of the words and the length of the sentences in a passage and combine these counts into a reading ease (RE) score (Flesch, 1948). The underlying logic is clear — the longer the sentences, and the longer the words within them, the more difficult the text will be. Scores can be grouped into the categories shown in Table 1.1.2.
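The calculation described above can be sketched in a few lines. The constants are the weights from Flesch's (1948) published formula; the vowel-group syllable counter is only a rough stand-in for the more careful syllable counts that packaged tools use:

```python
import re

def flesch_reading_ease(text):
    """Flesch (1948) Reading Ease: RE = 206.835 - 1.015 * ASL - 84.6 * ASW,
    where ASL is the average sentence length in words and ASW is the
    average word length in syllables. Higher scores mean easier text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    # Rough syllable estimate: count vowel groups (a heuristic, not phonetics).
    def syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    asl = len(words) / len(sentences)
    asw = sum(syllables(w) for w in words) / len(words)
    return 206.835 - 1.015 * asl - 84.6 * asw

easy = "The cat sat on the mat. The dog ran. We saw it all."
hard = ("Notwithstanding methodological heterogeneity, readability "
        "investigations necessitate standardised operationalisation.")
print(flesch_reading_ease(easy) > flesch_reading_ease(hard))  # True

# The measure is blind to meaning: scrambling the words within each
# sentence leaves the counts, and hence the score, unchanged.
scrambled = "Mat the on sat cat the. Ran dog the. All it saw we."
print(flesch_reading_ease(scrambled) == flesch_reading_ease(easy))  # True
```

Note that only counts enter the formula — the scrambled sentence scores identically to the original, which is exactly the limitation discussed later in this section.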

Table 1.1.2 Flesch scores and their interpretation

Flesch RE score   Reading age   Difficulty level   Example for UK readers
90-100            10-11 years   Very easy          Children's stories
80-90             11-12 years   Easy               Women's fiction
70-80             12-13 years   Fairly easy        Popular novels
60-70             14-15 years   Standard           Tabloid newspapers
50-60             16-17 years   Fairly difficult   Introductory textbooks
30-50             18-20 years   Difficult          Students' essays
0-30              --            Very difficult     Academic articles

Adapted from Hartley, Sotto and Fox (2004), p. 193. © Sage Publications.


Academic text typically falls into the 'difficult' and the 'very difficult' categories.

There are a number of obvious limitations to this measure (along with most other computer-based measures of readability). The formula was developed in the 1940s for use with popular reading materials rather than academic text: it is thus somewhat dated and not entirely appropriate in the current context. The notion that longer words and longer sentences make for more difficult text, although generally true, is naive. Some short sentences are very difficult to understand. Thus the calculations do not take into account the meaning of the text to the reader (and you will get the same score if you process the text backwards), nor do they take into account the readers' prior knowledge about the topic in question, or their motivation — both essential contributors to reading difficulty.

Nonetheless, despite these limitations, the Flesch score has been widely used to assess the readability of academic text, partly because it is a convenient tool on most writers' personal computers. It is simple and easy to run and keeps a check on the difficulty level of what you are writing as you proceed. It is also useful as a measure of the relative difficulty of two or more versions of the same text — we might well agree that one version with a Flesch score of 50 is likely to be easier to read than another version with a score of 30, and that some useful information might be obtained if we use the scores to make comparisons between different texts, and between different versions of the same text.

Some examples might serve to illustrate this. My colleagues and I, for instance, once carried out four separate studies using the Flesch and other computer-based measures of text to test the idea that influential articles would in fact be more readable than less influential ones (Hartley et al., 2002). In the first two of these studies, we compared the readability of sections from famous articles in psychology with that of sections from the articles that immediately followed them in the same journals (and were not famous). In the second two studies, we compared the readability of highly cited articles in psychology with that of similar controls. The results showed that the famous articles were significantly easier to read than were their controls (average Flesch scores of 33 versus 25), but that this did not occur for the highly cited articles (average Flesch scores of 26 and 25).

In another study, we compared the readability of texts in the sciences, the arts and the social sciences, written in various genres (Hartley et al., 2004). Here, we compared extracts in all three disciplines from sets of research articles, textbooks for colleagues, textbooks for students, specialist magazine articles and magazine articles for the general public. The main finding here was not surprising — the texts got easier to read, as measured by the Flesch scores, as they moved across the genres, from 15 to 60. There was little support, however, for our notion that the scientific texts would be easier to read than those in the other disciplines within each of the different genres.

In a third example, we used Flesch scores, together with data from other computer-based measures, to examine the relative readability of the abstracts, introductions, and discussions from eighty academic papers in psychology (Hartley et al., 2003). Here the abstracts scored lowest in terms of readability (mean score of 18), the introductions came next (mean score of 21), and the discussions did best of all (mean score of 23). Intriguingly, although the mean scores of the different sections differed, the authors wrote in stylistically consistent ways across the sections. Thus, readability was variable across the sections, but consistent within the authors.
