
The interface between unitary hypothesis and componential approach to testing reading skills: do subjects show similar levels of performance with respect to specific reading sub-skills in tests representing both theories? Descriptive correlational study

Abstract

In order to better understand the nature of reading behavior, it is essential to scrutinize the underlying assumptions of the two competing theories, the componential approach and the unitary hypothesis, and to see how these are reflected in the testing of reading skills. Accordingly, the current study aimed to broaden existing knowledge by identifying whether there was a relationship between the overall scores of learners in different ability groups in the Reading Sub-skills Test and their reading levels in the Cloze Test, and by showing how they differed in their performance on specific reading sub-skills. To address the specific research objectives, tests were used to collect the required data: a cloze test to determine the subjects’ reading levels, and a reading sub-skills test to establish their performance on the sub-skills at word, sentence, and discourse level. Descriptive statistics (i.e. frequencies and percentages) and correlations were used to analyze the data. SPSS was used to carry out the computations, and normality tests showed that the test scores were not normally distributed. Hence, Kendall’s tau, a non-parametric test, was used to compute the correlation coefficients. The data analysis revealed that average-scorers, low-scorers, test-takers at instructional reading level, and those at frustration level had much difficulty, to varying degrees, with most of the reading sub-skills. Finally, recommendations were made to help learners overcome their difficulties with the reading sub-skills.

Background of the study

The controversy over whether reading behavior consists of several specific skill components, on the one hand, or is an undifferentiated, unitary process, on the other, remains an unresolved issue (Weir & Porter, 1994). Weir and Porter argued that polarizing one’s stand in favor of one or the other approach might result in incorrect assessment of students’ general reading ability. Adherence to one or the other view of reading might thus have a direct effect on teachers’ choice of assessment methods for evaluating their students’ reading abilities. Consequently, teachers need to familiarize themselves with the claims made by these rival theories in order to fully understand the nature of reading behavior.

Proponents of the componential approach claim that reading behaviour consists of several specific components (Davis, 1968; Heaton, 1988; Hughes, 1989; Munby, 1978). Other researchers used multiple regression and thereby managed to identify the existence of separate reading sub-skills (Davey, 1988; Drum et al., 1981; Pollitt et al., 1985). In addition, studies conducted in the 1980s and 1990s attempted to validate the importance of a number of strategies and endorsed their acceptance to varying degrees (Duke & Pearson, 2002; Pearson et al., 1992). Even though these researchers agreed on the divisible nature of the reading skill, they failed to agree on the exact number of sub-skills that presumably constituted the macro-skill. Even so, these components proved important as they served as a framework for writing textbooks, constructing tests, and designing courses (Grabe, 1991; Liu, 2010; Lumley, 1993).

In spite of its utility, this divisibility view was challenged by another group of researchers, who claimed that the componential approach lacked empirical support for the existence of separate skill components. Alderson (1990a, b) attempted to determine the relationship between skill components and reading test items by asking a team of experts to judge which test items tested which skill components. The results suggested that the judges found it difficult to agree on which specific skill component was operationalized in a particular test item. Similarly, other researchers (Alderson & Lukmani, 1989; Carver, 1992; Lunzer et al., 1979; Rosenshine, 1980; Rost, 1993) could not empirically verify the separate functioning of the specific skill components and their operationalization in test items. This situation cast doubt on the viability of the divisibility theory.

As a reaction to the lack of empirical support for the componential approach, Oller (1979) proposed his influential holistic view of language, which he termed the “Unitary Competence Hypothesis (UCH)”. Oller rigorously analyzed the scores from a wide variety of language tests and reported discovering a ‘g-factor’ that presumably accounted for the unitary nature of general language proficiency. In other words, Oller contended that even if language performance is thought to be composed of specific skill components, it draws on the same underlying source. Similarly, researchers such as Lunzer et al. (1979), Rosenshine (1980), and Rost (1993), using factor analysis, found that the skill components load on the same factor even though they appear to be putatively different. That is to say, even if they seem different, the skill components function statistically in a very similar way. This may suggest that reading is a single, undifferentiated ability.

So far, the two rival theories seem to be mutually exclusive, and this difference is reflected in their approaches to teaching and assessing the reading skills. Specifically, the componential approach uses the specific skills in teaching and assessing students’ reading comprehension (Weir & Porter, 1994), whereas the unitary view relies on integrative or holistic approaches to teaching and assessment (Oller, 1979).

Test constructors who identified themselves with the componential approach set comprehension test items in order to measure students’ reading ability. Based on the nature of comprehension questions, Liu (2010) reported that researchers seemed to distinguish three levels of reading comprehension: the literal, inferential/interpretative, and critical levels. According to his description, the literal comprehension level refers to students’ understanding of plainly stated meanings. At the inferential level, students should be able to work out the relationships between sentences and the underlying meanings of sentences in order to fully understand a given reading text; here, students need to make inferences and draw conclusions. The critical level requires learners to apply the highest level of cognitive operations: learners are expected to have attained the ability to evaluate what they have read against their background knowledge.

There appears to be a direct relationship between reading test scores and reading comprehension levels. Fisher (2005) contends that students who have attained only the literal level of comprehension might have great difficulty working out the underlying meanings of sentences and demonstrating evaluative ability with a given reading text. Conversely, learners who have already developed evaluative ability might not have difficulty with literal and inferential question items. Hence, learners’ test scores indicate the level at which they can perform well in test-taking situations.

On the other hand, test writers who took the unitary view used integrative tests such as cloze tests to measure students’ reading ability. Although Taylor (1956) is credited with developing the cloze procedure, it was Rankin and Culhane (1969) who worked out reading levels by comparing scores from multiple-choice comprehension test items and cloze test items. The reading levels thus identified were the independent, instructional, and frustration levels. As the name indicates, the independent reading level suggests that students can understand a specific reading text by themselves. Students at the instructional reading level, however, require some assistance from the teacher in order to understand it. Students at the frustration reading level might have great difficulty understanding the given reading text even with the teacher’s assistance. Thus, learners’ scores in cloze tests might indicate their reading levels in test-taking situations.

Despite the persistence of the theoretical controversy over decades, researchers’ interest in pursuing studies framed by these rival theories seems to have decreased since the late 1990s, and the research trend took a different direction. Instead of focusing on the rival theories, most studies examined how students performed with respect to specific reading sub-skills. For example, Morley (2009) and Baker and Ellece (2011) studied lexical cohesion in rhetorical structures and discourse analysis, while Khaleel (2010) focused on presupposition triggers. Davoudi (2005), Preszler (2006), and Warnidah and Suwarno (2016) took an interest in inference-making skills and the difficulties associated with them. Similarly, Jitendra et al. (2001) studied main idea strategy instruction, whereas Rapp et al. (2007) focused on the comprehension processing skills of struggling readers. Meyer et al. (1980) and Kendeou and van den Broek (2007), for their part, studied text structure and its effect on the comprehension process. Furthermore, Kim and Piper (2019) studied the structural relations between word reading skills, text reading fluency, and reading comprehension; Hessamy and Sadeghi (2013) studied the hierarchical relationship between reading sub-skills, specifically their difficulty level and contribution to reading comprehension; and Kim and Jang (2009) studied how reading sub-skills functioned differentially in a standardized reading test (OSSLT) for L1 and L2 students. Other researchers focused on theories of reading. For instance, a number of studies (Geva & Farniaa, 2012; Gottardo & Mueller, 2009; Kirby & Savage, 2008; Proctor et al., 2005) attempted to validate the Simple View of Reading (SVR), which accounts for reading comprehension in a very simple way. Lie et al. (2020), for their part, studied the contributions of the three domains (i.e. cognitive, psychological, and ecological) in the Componential Model of Reading (CMR) to reading comprehension.

Despite researchers’ declining interest in the theoretical dispute, we may still question the assumption that the two theoretical positions are mutually exclusive. Could there be some relationship or overlap between the two rival positions, given that both are concerned with measuring reading behavior? Could there be linear relationships between reading levels and ability groups? How do learners belonging to different reading levels and ability groups fare with respect to specific reading sub-skills?

The purpose of the study

General objective

The study aims to identify whether there is a relationship between subjects belonging to different reading levels and those belonging to different ability groups with respect to their performance in reading sub-skills.

Research questions

  1. How do subjects belonging to different ability groups (high-scorers, average-scorers, and low-scorers) in Reading Sub-skills Test perform with respect to reading sub-skills requiring operations at discourse, sentence, and word level?

  2. How do subjects belonging to different reading levels (independent, instructional, and frustration) in Cloze Test perform with respect to reading sub-skills requiring operations at discourse, sentence, and word level?

  3. Is there a relationship between the performance of subjects at different reading levels (independent, instructional, and frustration) in Cloze Test and those at different ability groups (high-scorers, average-scorers, and low-scorers) in Reading Sub-skills Test?

Methodology

Aims, design, and setting of the study

As explained earlier, the aim of the study is to identify the relationship between subjects belonging to different reading levels and ability groups with respect to their performance in reading sub-skills. The study follows a descriptive correlational design to meet the specific research objectives. The setting of the study is three government universities in the southern part of Ethiopia, East Africa. The specific regions are the Southern Nations, Nationalities, and People’s Region and the newly formed Sidama Region.

Population, sample, and sampling methods

The population of the study consisted of 3rd year undergraduate students enrolled at three universities in southern Ethiopia: Arba Minch University, Jinka University, and Hawassa University. Convenience sampling was used to select the institutions, while availability sampling was used to select the study subjects. Regarding field of study, 37 were HO students and 76 were English language majors, making a total of 113 participants. The rationale for selecting English majors was that they had already taken skills courses, which should have given them familiarity with a repertoire of reading strategies for dealing with different reading texts; it was also believed that they had developed reasonable proficiency in extracting information from written texts. The reason for including the health students was to compensate for the scarcity of high-scoring students among the English language majors.

As tests were the only means of collecting data, the participants sat for consecutive tests on different occasions. Prior to data collection, the researcher obtained the consent of the heads of the institutions and the willingness of the participants to take part in the study by explaining the purpose of the tests and the confidentiality of the test scores. Of the 113 participants, only 92 completed the tests (i.e. 33 HO students and 59 English language majors); 21 test papers were discarded because the subjects had shown a lack of interest in completing the test items. As Rankin and Culhane’s (1969) scale for multiple-choice comprehension test scores proved too high for Ethiopian students, the researchers used the cut-off points from Wendiyfraw et al. (2016) to determine the proportion of the different ability groups (i.e. high-scorers, average-scorers, and low-scorers). Hence, the cut-off points for the Reading Sub-skills Test were: high-scorers (70–79), average-scorers (51–68), and low-scorers (40–49). However, Rankin and Culhane’s (1969) scale (> 60, 40–60, < 40) was used to determine the reading levels in the Cloze Test, and test-takers’ scores were organized as follows: independent reading level (62–90), instructional reading level (40–58), and frustration reading level (2–38). In short, the scores computed for sub-skills performance and inter-group correlations came from 59 students for the Reading Sub-skills Test and 60 for the Cloze Test, whereas all 92 participants’ scores were used for the inter-test correlations.
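The two classification schemes above can be sketched as simple scoring routines. This is an illustrative reconstruction, not code from the study; the function names and the `unclassified` branch (for scores falling between the stated bands) are assumptions.

```python
def ability_group(score):
    """Classify a Reading Sub-skills Test score using the cut-off
    points adapted from Wendiyfraw et al. (2016)."""
    if 70 <= score <= 79:
        return "high-scorer"
    if 51 <= score <= 68:
        return "average-scorer"
    if 40 <= score <= 49:
        return "low-scorer"
    return "unclassified"  # scores falling between the stated bands


def reading_level(score):
    """Classify a Cloze Test score using Rankin and Culhane's (1969)
    scale: > 60 independent, 40-60 instructional, < 40 frustration."""
    if score > 60:
        return "independent"
    if score >= 40:
        return "instructional"
    return "frustration"
```

For example, a student scoring 75 on the Reading Sub-skills Test would fall into the high-scorers group, while a cloze score of 38 would place a test-taker at the frustration reading level.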

Methods of data collection

Two tests were used to collect the data required to answer the research questions: the Cloze Test and the Reading Sub-skills Test. The Cloze Test was prepared from a text entitled “Guta Plays Detective”, whereas the Reading Sub-skills Test was constructed from four reading texts: Happiness, Urbanization, Use of Drugs, and Compassion. The tests thus constructed were subjected to a validation process in order to make them more efficient. More specifically, teachers took part in focus group discussions (FGD) to match sub-skills with test items and to categorize the sub-skills under different levels.

Regarding the utility of the tests with respect to the research questions, the cloze test served to sort the subjects into different reading levels, whereas the reading sub-skills test showed how the subjects performed on the sub-skills at the different levels (i.e. word, sentence, and discourse). The following scale was used to interpret subjects’ performance on specific sub-skills at word, sentence, and discourse level: > 90, little difficulty; 70–89, less difficulty; 50–69, some difficulty; 20–49, much difficulty; and < 20, a great deal of difficulty.
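The interpretive scale above maps a group's percentage score on a sub-skill to a difficulty band; a minimal sketch of that mapping (the function name is illustrative, not from the study) might look like:

```python
def difficulty_band(pct):
    """Interpret a percentage score using the study's scale:
    > 90 little, 70-89 less, 50-69 some, 20-49 much,
    and < 20 a great deal of difficulty."""
    if pct > 90:
        return "little difficulty"
    if pct >= 70:
        return "less difficulty"
    if pct >= 50:
        return "some difficulty"
    if pct >= 20:
        return "much difficulty"
    return "a great deal of difficulty"
```

Under this scale, for instance, a group scoring 94% on guessing meanings of words from context would be described as having little difficulty with that sub-skill, and one scoring 17% as having a great deal of difficulty.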

Methods of data analysis

Since tests were the only means of collecting data, the methods of analysis were consequently quantitative. Descriptive statistics such as frequency counts and percentages were used to show how the subjects performed with respect to each reading sub-skill at word, sentence, and discourse level. In addition, correlation coefficients were used to determine the relationships between the reading levels and the ability groups across the two tests (the Reading Sub-skills Test and the Cloze Test). Prior to these computations, normality tests showed that the data were not normally distributed, as the plots revealed non-linearity, especially at both ends. Consequently, Kendall’s tau, a non-parametric test that is robust to outliers and does not assume a normal distribution, was used to work out the relationships between the tests on the one hand and between the reading levels and the ability groups on the other. The SPSS software was used to run the normality tests and correlation coefficients and to work out the frequencies and percentages.
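Outside SPSS, the same two steps, a normality check followed by a non-parametric correlation, can be reproduced with SciPy. The paired scores below are invented for illustration only and are not the study's data.

```python
from scipy import stats

# Hypothetical paired scores on the two tests (illustrative data only)
subskills = [72, 55, 41, 63, 48, 77, 52, 45, 68, 39]
cloze     = [70, 52, 38, 60, 44, 74, 50, 40, 65, 35]

# Shapiro-Wilk test: a small p-value suggests the scores depart from
# normality, motivating a non-parametric statistic
w_stat, p_norm = stats.shapiro(subskills)

# Kendall's tau: rank-based, robust to outliers, and free of any
# normality assumption
tau, p_val = stats.kendalltau(subskills, cloze)
print(f"tau = {tau:.3f}, p = {p_val:.4f}")
```

With real data, a significant tau between the two test score sets would correspond to the inter-test correlation reported in the Results section.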

The test validation process

Before the main study, the researcher traveled to the study sites to get the tests validated. The validation process aimed to collect data on (1) which test items tested which sub-skills, and (2) whether a particular sub-skill was operationalized at word, sentence, or discourse level. To this end, respondents were given a list of sub-skills and the test items and asked to match them; the sub-skill that the majority of respondents identified as being tested by a particular test item was taken as the valid response. Next, the sub-skills thus identified, along with the test items, were given to the respondents to rate the level at which each was operationalized (word, sentence, or discourse). Likewise, the levels of operationalization on which the majority of respondents agreed were taken as valid responses, while the levels on which the respondents failed to agree were discussed among TEFL PhD scholars. On the basis of the discussion outcomes, modifications were made to the test items.

The results of test validation process

A further analysis showed an imbalance in the proportion of sub-skills to test items, which necessitated adding more test items to strike a balance between the sub-skills. The resulting distribution of test items across sub-skills was as follows:

  • Understanding functional value of sentences and paragraphs, 5 items

  • Recognizing the presuppositions underlying the text, 6 items

  • Recognizing implications and making inference, 5 items

  • Recognizing text structure, 5 items

  • Interpreting discourse markers, 5 items

  • Understanding writer’s tone and purpose, 5 items

  • Identifying main ideas, 6 items

  • Interpreting lexical cohesion, 5 items

  • Inferring/ guessing meanings of words from context, 5 items

  • Identifying referents/ antecedents, 6 items

Next, the revised test items were tried out on learners, which revealed previously unnoticed ambiguities and unclear instructions that brought about negative or poor discrimination. Some of the items were also found to be too easy for learners to answer. As a result, the researcher brought these issues up for discussion among the scholars mentioned earlier, who suggested ways in which the test items could be made more efficient.
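The discrimination problems mentioned here are the concern of classical item analysis. As a hypothetical illustration (not the study's own computation), the discrimination index compares how the top- and bottom-scoring groups fared on each item:

```python
def discrimination_index(upper_correct, lower_correct, group_size):
    """Classical item discrimination index: the proportion of the
    top-scoring group answering an item correctly minus that of the
    bottom-scoring group. Values near zero indicate poor
    discrimination; negative values mean low scorers outperformed
    high scorers on the item."""
    return (upper_correct - lower_correct) / group_size
```

For example, if 18 of the top 20 scorers but only 6 of the bottom 20 answer an item correctly, the index is 0.6 (a well-discriminating item), whereas an item answered correctly by more low scorers than high scorers yields a negative index, the "negative discrimination" flagged during the tryout.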

Selection of the reading sub-skills

Research in reading skills peaked from the late 1970s to the early 1990s (Urquhart & Weir, 1998). The main concern of these studies was identifying the sub-skills that readers employ during the reading process. Researchers who contributed to the skills taxonomy studies included Davis (1968), Lunzer et al. (1979), Munby (1978), Frydenberg (1982), and Grabe (1991), each of whom provided varied specifications of reading sub-skills. The sub-skills for the current study were drawn from Nuttall’s (1996) specifications. Nuttall categorized the sub-skills under two main headings: word-attack skills and text-attack skills. The sub-skills that fall under word-attack skills include identifying referents/antecedents, inferring/guessing meanings of words from context, interpreting lexical cohesion, and interpreting discourse markers. The sub-skills that come under text-attack skills are understanding the functional value of sentences and paragraphs, recognizing the presuppositions underlying the text, recognizing implications and making inferences, recognizing text structure, understanding the writer’s tone and purpose, and identifying main ideas. In general, the researchers mentioned above conceptualize sub-skills as specific behaviours that readers perform while trying to extract the meanings contained in reading texts.

Results and discussions

Subjects’ performance in the cloze-test

Of the 89 test-takers who sat for the cloze test, only 60 completed it satisfactorily. As Table 1 shows, the reading levels formed a pyramid: more subjects constituted the base, and their numbers decreased moving upward. A sizeable proportion (38%) of subjects’ reading comprehension appeared to be at the frustration level. According to Rankin and Culhane (1969), subjects at the frustration level might have found the reading text in the cloze test too difficult to understand, suggesting that they might not understand it even with assistance from their instructors. A similar proportion (35%) of subjects fell at the instructional level; according to the same authors, this indicates that the reading text was at the right level of difficulty for this group, so they could understand the material with a little assistance from their instructors. Finally, a relatively small proportion (27%) of subjects reached the independent level, meaning that the reading text might have been easy for them to understand; subjects at this level would not need any assistance but could understand the text on their own. This finding was consistent with that of Wendiyfraw et al. (2016).

Table 1 Subjects’ range of scores and proportion across reading levels

Subjects’ performance in the reading sub-skills test

The Reading Sub-skills Test was composed of three parts whose test items were drawn from three reading texts, designed in such a way that they tested different sub-skills. Of the 89 students who sat for the test, only 59 successfully completed it, and their test scores were taken for computation. As Table 2 shows, the cut-off points differed from those of the cloze test; rather, they were based on those of Wendiyfraw et al.’s (2016) study. Looking at the proportion of test-takers across the ability groups, the high-scorers were fewer than the other two groups. The scarcity of high-scorers seems to be common in Ethiopian universities, as was also the case in Wendiyfraw et al.’s (2016) findings.

Table 2 Subjects’ range of scores and proportion across ability groups

Sub-skills performance of different ability groups in the reading sub-skills test

Word level sub-skills performance of different ability groups in the reading sub-skills test

Table 3 presents the sub-skills that required operations at word level. As the data show, high-scorers seemed to have little difficulty with guessing meanings of words from context (94%), and less difficulty with interpreting lexical cohesion (82%) and identifying referents (79%). Average-scorers, however, performed differently: they seemed to have less difficulty with interpreting lexical cohesion (76%), but some difficulty with guessing meanings of words from context (66%) and identifying referents (64%). Low-scorers, in turn, performed differently from both groups: they appeared to have some difficulty with interpreting lexical cohesion (57%), but much difficulty with identifying referents (37%) and guessing meanings of words from context (26%).

Table 3 Word level sub-skills performance of different ability groups

Sub-skills performance of different ability groups in the reading sub-skills test at sentence level

Table 4 presents the sub-skills that required operations at sentence level. As the data show, high-scorers seemed to have little difficulty with recognizing the presuppositions underlying the text (92%), less difficulty with recognizing implications and making inferences (85%), and some difficulty with understanding the functional value of sentences (54%). Average-scorers seemed to have less difficulty with recognizing presuppositions underlying the text (78%), some difficulty with recognizing implications and making inferences (50%), and much difficulty with understanding the functional value of sentences (41%). Low-scorers appeared to have some difficulty with recognizing presuppositions underlying the text (52%), and much difficulty with the remaining sub-skills: understanding the functional value of sentences (39%) and recognizing implications and making inferences (26%).

Table 4 Sentence level sub-skills performance of different ability groups

Sub-skills performance of different ability groups in the reading sub-skills test at discourse level

Table 5 presents the sub-skills that operate at discourse level. High-scorers seemed to have less difficulty with a number of sub-skills: identifying main ideas (78%), understanding the writer’s tone and purpose (75%), and recognizing implications and making inferences (72%). At the same time, they appeared to have some difficulty with recognizing text structure (69%), understanding the functional value of paragraphs (69%), interpreting discourse markers (68%), and recognizing the presuppositions underlying the text (65%).

Table 5 Discourse level sub-skills performance of different ability groups

Average-scorers showed a quite different performance. They seemed to have some difficulty with identifying main ideas (65%), interpreting discourse markers (62%), recognizing implications and making inferences (61%), and understanding the writer’s tone and purpose (57%). However, they seemed to have much difficulty with understanding the functional value of paragraphs (48%), recognizing text structure (46%), and recognizing the presuppositions underlying the text (45%).

As opposed to the other groups, low-scorers showed a very different performance. They seemed to have some difficulty with understanding the writer's tone and purpose (60%), identifying main ideas (54%), and interpreting discourse markers (52%). Nevertheless, they appeared to have much difficulty with a number of sub-skills: recognizing implications and making inferences (45%), recognizing text structure (43%), and recognizing the presuppositions underlying the text (41%). Moreover, they exhibited a great deal of difficulty with understanding the functional value of paragraphs (17%).

Sub-skills performance of subjects at different reading levels in the reading sub-skills test

Word level sub-skills performance of subjects at different reading levels

As can be observed in Table 6, subjects at the independent reading level seemed to have little difficulty with guessing meanings of words from context (95%), and less difficulty with interpreting lexical cohesion (81%) and identifying referents (80%). Subjects at the instructional level, however, appeared to have performed differently: they seemed to have some difficulty with all three sub-skills: interpreting lexical cohesion (69%), identifying referents (54%), and guessing meanings of words from context (53%). Subjects at the frustration level showed yet another pattern: they seemed to have some difficulty with interpreting lexical cohesion (63%), but much difficulty with identifying referents (36%) and guessing meanings of words from context (28%).

Table 6 Subjects’ performance of sub-skills operating at word level by reading levels

Sentence level sub-skills performance of subjects at different reading levels

As Table 7 shows, subjects at the independent reading level seemed to have less difficulty with recognizing the presuppositions underlying the text (88%) and recognizing implications and making inferences (72%). However, they appeared to have much difficulty with understanding the functional value of sentences (29%). Subjects at the instructional level seemed to perform differently: they appeared to have some difficulty with recognizing presuppositions underlying the text (67%), while they had much difficulty with recognizing implications and making inferences (48%) and understanding the functional value of sentences (43%). Similarly, subjects at the frustration level seemed to have some difficulty with recognizing presuppositions underlying the text (65%), but much difficulty with the remaining sub-skills: understanding the functional value of sentences (35%) and recognizing implications and making inferences (33%).

Table 7 Subjects’ performance of sub-skills operating at sentence level by reading levels

Discourse level sub-skills performance of subjects at different reading levels

As Table 8 shows, learners who achieved the independent reading level seemed to have less difficulty with identifying main ideas (76%) and recognizing implications and making inferences (73%). They appeared to have some difficulty with the remaining sub-skills: understanding the writer’s tone and purpose (69%), recognizing text structure (64%), understanding the functional value of paragraphs (63%), interpreting discourse markers (59%), and recognizing the presuppositions underlying the text (59%).

Table 8 Subjects’ performance of sub-skills operating at discourse level by reading levels

Learners at the instructional reading level, however, appeared to have performed somewhat differently. They seemed to have some difficulty with a number of sub-skills: interpreting discourse markers (63%), identifying main ideas (59%), understanding the writer’s tone and purpose (57%), understanding the functional value of paragraphs (55%), and recognizing implications and making inferences (54%). Nevertheless, they had much difficulty with recognizing the presuppositions underlying the text (45%) and recognizing text structure (43%).

The data for learners at the frustration reading level reveal a slight change in performance. They seemed to have some difficulty with several of the sub-skills: understanding the writer’s tone and purpose (53%), interpreting discourse markers (52%), identifying main ideas (51%), and recognizing implications and making inferences (51%). On the other hand, they seemed to have much difficulty with recognizing text structure (36%) and recognizing the presuppositions underlying the text (35%). Unlike learners at the other two reading levels, they seemed to have a great deal of difficulty with understanding the functional value of paragraphs (15%).

Correlations between tests

Before attempting to work out the relationship between reading levels and different ability groups, it is logical to see what kind of relationship exists between the tests themselves. As can be observed in Table 9, the two tests had a positive correlation; that is, we may expect test-takers’ scores to increase together in both tests. Regarding the strength of the relationship, Kendall’s tau yielded a strong correlation (0.987) between the tests, and the p value showed that this correlation was significant at the 0.01 level. This might suggest that the tests measured the same behavior even though they had different theoretical bases.
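The correlation above was computed in SPSS; as a purely illustrative sketch (the paired scores below are hypothetical placeholders, not the study’s data), the same Kendall’s tau computation can be reproduced in Python with SciPy:

```python
from scipy.stats import kendalltau

# Hypothetical paired totals (NOT the study's data): each subject's score
# on the Reading Sub-skills Test and on the Cloze Test.
subskills_scores = [82, 75, 68, 91, 55, 47, 60, 73, 88, 51]
cloze_scores = [39, 35, 30, 43, 24, 20, 36, 33, 41, 22]

# Kendall's tau is rank-based (non-parametric), so it is appropriate here
# because the normality tests showed the scores were not normally distributed.
tau, p_value = kendalltau(subskills_scores, cloze_scores)
print(f"tau = {tau:.3f}, p = {p_value:.4f}")
```

A positive tau close to 1 indicates that subjects who rank high on one test also tend to rank high on the other, which is the pattern the study reports.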

Table 9 Relationship between reading sub-skills test and cloze test

Relationship between reading levels in cloze test and ability groups in reading sub-skills test

As shown in Table 10, Kendall’s tau_b indicated that the strength of the relationship between high-scorers in the Reading Sub-skills Test and test-takers at the independent reading level in the Cloze Test was moderate (0.455). Similarly, the strength of the relationship between average-scorers in the Reading Sub-skills Test and test-takers at the instructional reading level in the Cloze Test was also moderate (0.477). The p values showed that both relationships were significant, at the 0.05 level for the former and at the 0.01 level for the latter. This means that almost half of the students in both pairings were able to maintain similar scores in the two tests. This finding might suggest that the componential and unitary views share some commonalities despite their rivalry. However, Kendall’s tau_b revealed that the relationship between low-scorers in the Reading Sub-skills Test and test-takers at the frustration level in the Cloze Test was very weak (0.03), and the p value showed that this relationship was not significant, indicating almost no relationship. This could perhaps be attributed to the disparity between the cut-off points of the two tests: low-scorers’ scores ranged from 40 to 49, whereas those of students at the frustration reading level ranged from 26 to 38.
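The per-group correlations described above can be sketched as follows. This is a minimal illustration under stated assumptions: the group labels, scores, and sample sizes are invented for demonstration, not the study’s data.

```python
from scipy.stats import kendalltau

# Hypothetical records (NOT the study's data): tuples of
# (ability group, Reading Sub-skills Test score, Cloze Test score).
records = [
    ("high", 92, 43), ("high", 88, 41), ("high", 85, 40), ("high", 81, 42),
    ("avg",  68, 34), ("avg",  64, 31), ("avg",  61, 33), ("avg",  58, 30),
    ("low",  48, 27), ("low",  45, 29), ("low",  42, 26), ("low",  40, 28),
]

# Compute Kendall's tau_b within each ability group separately, mirroring
# the subgroup correlations reported in Table 10.
for group in ("high", "avg", "low"):
    xs = [s for g, s, _ in records if g == group]
    ys = [c for g, _, c in records if g == group]
    tau, p = kendalltau(xs, ys)  # SciPy's default variant is tau_b (handles ties)
    print(f"{group}: tau_b = {tau:.2f}, p = {p:.3f}")
```

Note that with small subgroups the p values carry little power, which is consistent with the study’s caution about the small number of subjects per reading level and ability group.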

Table 10 Correlations between Reading levels and ability groups in reading sub-skills test

Discussion of the findings

Word level sub-skills performance and relationship

Comparing Tables 3 and 6, the sub-skills performance of high-scorers in the Reading Sub-skills Test and that of students at the independent reading level was nearly the same. High-scorers and subjects at the independent reading level had little difficulty with guessing meanings of words from context (95% and 94%), and less difficulty with interpreting lexical cohesion (81% and 82%) and identifying referents (79% and 80%). When students have less difficulty with lexical cohesion, this may suggest that they have the ability to identify lexical items which fall under the same semantic field or contribute to maintaining lexical meaning (Baker & Ellece, 2011; Morley, 2009). Consequently, we might expect high correlations between the two groups when their performance was much the same. However, Table 10 shows that the strength of the relationship between the two groups was only moderate (0.455), and the p value (0.019) showed that the relationship was significant at the 0.05 level.

When we observe the performance of average-scorers and students at the instructional reading level, they seemed to perform the same except for interpreting lexical cohesion, with which they had less and some difficulty, respectively. This was supported by the data in Table 10, which showed that the strength of the relationship between average-scorers and students at the instructional reading level was moderate (0.477), and the p value (0.002) showed that the relationship was significant at the 0.01 level.

Similarly, we could observe (in Tables 3 and 6) the same pattern of performance between low-scorers and students at the frustration reading level. They seemed to have some difficulty with interpreting lexical cohesion (57% and 63%), and much difficulty with identifying referents (37% and 36%) and guessing meanings of words from context (26% and 28%). Nevertheless, Table 10 shows almost no relationship (0.03) between low-scorers and students at the frustration reading level. This could be attributed to the great variation in the range of scores in the two tests (low-scorers: 40–49; frustration reading level: 26–38). Consequently, there were more similarities than differences between the Reading Sub-skills Test and the Cloze Test although they appeared to have contradictory theoretical bases (the componential approach and the unitary approach).

Sentence level sub-skills performance and relationship

As shown in Tables 4 and 7, high-scorers in the Reading Sub-skills Test and test-takers at the independent reading level seemed to have some variations in their performance. For example, high-scorers seemed to have little difficulty with recognizing presuppositions underlying the text (92%) while learners at the independent reading level had less difficulty with it (88%). This means that both groups seemed to have small differences in their ability to construct the presupposed meaning that they commonly shared with the writer of the particular text (Khaleel, 2010; Tyler, 1978). Similarly, they differed in their performance of understanding the functional value of sentences, with which the former had less difficulty (54%) whereas the latter had much difficulty (29%). Despite these differences, they seemed to have less difficulty with recognizing implications and making inferences (85% and 72%). Making inferences may require learners to read between the lines and work out the implied meaning beyond the text (Davoudi, 2005; Preszler, 2006; Warnidah & Suwarno, 2016). Even so, both groups seemed to have less difficulty with this particular sub-skill. This difference in performance seemed to be reflected in the moderate (0.455) relationship between the two groups in Table 10.

Similarly, average-scorers in Reading Sub-skills Test and test-takers at instructional reading level in Cloze Test had some variations in their performance. For instance, average-scorers appeared to have less difficulty with recognizing presuppositions underlying the text (78%) while test-takers at instructional reading level had some difficulty with it (67%). On top of that, average-scorers seemed to have some difficulty with recognizing implications and making inferences (50%), but students at instructional reading level had much difficulty with it (48%). In spite of these differences, they seemed to have a similarity: both groups appeared to have much difficulty with understanding functional value of sentences (41% and 43%), respectively. This finding was supported by the moderate (0.477) relationship between the two groups in Table 10 and the p value (0.002) indicated that the relationship was significant at 0.01 level.

However, low-scorers in the Reading Sub-skills Test and test-takers at the frustration reading level in the Cloze Test seemed to have nearly the same performance. For example, both groups had some difficulty with recognizing presuppositions underlying the text (52% and 65%), and much difficulty with understanding the functional value of sentences (39% and 35%) and recognizing implications and making inferences (26% and 33%), respectively. However, this was inconsistent with the very weak (0.03), almost non-existent relationship between the two groups in Table 10.

Discourse level sub-skills performance and relationship

When we compare Tables 5 and 8, high-scorers in Reading Sub-skills Test and students at independent reading level in Cloze Test seemed to have little variations in performance. For instance, high-scorers and students at independent reading level had less difficulty with two of the sub-skills: identifying main ideas (78% and 76%), and recognizing implications and making inferences (72% and 73%), respectively. When students have less difficulty with identifying main ideas, it may suggest that both groups have the ability to distinguish main ideas from specific details (Duke & Pearson, 2008; Jitendra et al., 2001; Watson et al., 2012). At the same time, both groups seemed to have some difficulty with the rest of the sub-skills: recognizing text structure (69% and 64%), understanding the functional value of paragraphs (69% and 63%), interpreting discourse markers (68% and 59%), and recognizing presuppositions underlying the text (65% and 59%), respectively. Nevertheless, Kendeou and van den Broek (2007) claimed that proficient readers make use of text structure to organize textual content in their memory to attain better understanding. Besides, in Table 10 it was reported that the strength of relationship between the groups was found to be moderate (0.455). This might suggest that the test scores of the subjects who belonged to both groups had variations even though they might have similar performances with respect to the sub-skills.

Similarly, average-scorers in the Reading Sub-skills Test and students at the instructional reading level in the Cloze Test appeared to have little variation in their performance of the sub-skills. For example, both groups seemed to have some difficulty with identifying main ideas (65% and 59%), interpreting discourse markers (62% and 63%), recognizing implications and making inferences (61% and 54%), and understanding writer’s tone and purpose (57% and 57%), respectively. Similarly, both groups seemed to have much difficulty with recognizing text structure (46% and 43%) and recognizing presuppositions underlying the text (45% and 45%), respectively. This result can be accounted for by the moderate relationship (0.477) between the two groups in Table 10.

Furthermore, low-scorers in the Reading Sub-skills Test and students at the frustration level in the Cloze Test seemed to have some variations in their performance. For instance, both groups appeared to have some difficulty with understanding writer's tone and purpose (60% and 53%), identifying main ideas (54% and 51%), and interpreting discourse markers (52% and 52%). At the same time, the two groups seemed to have much difficulty with recognizing text structure (43% and 36%) and recognizing presuppositions underlying the text (41% and 35%). This finding was consistent with Meyer et al. (1980) and Rapp et al. (2007), whose research suggested that struggling readers rarely rely on text structure to extract meaning from reading texts. However, the two groups differed in performance on the rest of the sub-skills. In Table 10, it was reported that there was almost no relationship (0.03) between the two groups. This could be attributed to the differences in the cut-off points of the two groups.

Implications and limitations of the study

Since the controversy over whether reading is composed of specific sub-skills or is an undifferentiated unitary entity is still unresolved, the findings of the current study might prompt researchers to reconsider the issue and conduct further research on it. As recalled in the last paragraph of the introductory section, researchers’ attention was diverted from the controversy itself to the performance of learners on specific reading sub-skills. In other words, the findings of this research might serve as a reminder for future researchers to redirect the focus of their studies to the controversy between the two rival theories.

At the same time, the limitations of the current study could be attributed to (1) the disparity between the cut-off points of the two tests, which resulted from the strict adherence to the established cut-off points for the Cloze Test, and (2) the lack of high-scoring students majoring in English language. In this regard, future researchers can minimize these limitations by applying equivalent cut-off points across tests and maintaining a proportional number of subjects across the different ability groups and reading levels.

Conclusions

As we recall, the purpose of the current study was to find out (1) how subjects at different ability groups and reading levels performed with respect to reading sub-skills at word, sentence, and discourse level and (2) what kind of relationship existed between the ability groups and reading levels. Hence, this section was aimed at drawing conclusions with respect to the specific research questions.

The 1st question was about how subjects at different ability groups in the Reading Sub-skills Test performed with respect to sub-skills at word, sentence, and discourse level. As shown in Table 3, the sub-skills at word level did not pose any difficulty for high-scorers or average-scorers; only low-scorers had much difficulty with identifying referents/antecedents (37%) and guessing meanings of words from context (26%). Hence, sub-skills at word level might not be troublesome for most of the learners. In Table 4, we could notice a different performance: only average-scorers and low-scorers appeared to have much difficulty with the sub-skills. Specifically, average-scorers and low-scorers had much difficulty with understanding the functional value of sentences (41% and 39%, respectively), and low-scorers also seemed to have much difficulty with recognizing implications and making inferences (26%). Thus, sentence level sub-skills might pose difficulty for average-scorers and low-scorers. Table 5 showed that average-scorers and low-scorers seemed to have much difficulty with three and four of the sub-skills at discourse level, respectively. Specifically, both groups appeared to have much difficulty with three of the sub-skills: understanding the functional value of paragraphs (48% and 17%), recognizing presuppositions underlying the text (45% and 41%), and recognizing text structure (46% and 43%), respectively. Besides, low-scorers had much difficulty with recognizing implications and making inferences (45%). Hence, sub-skills at discourse level posed more difficulty for average-scorers and low-scorers than for high-scorers.

The 2nd research question was about how subjects at different reading levels in the Cloze Test performed with respect to sub-skills at word, sentence, and discourse level in the Reading Sub-skills Test. As shown in Table 6, only test-takers at the frustration reading level seemed to have much difficulty with two of the sub-skills: identifying referents/antecedents (36%) and guessing meanings of words from context (28%). This may suggest that only students at the frustration reading level had much difficulty with sub-skills that operated at word level. In Table 7, we could see a very different picture. Students at the independent, instructional, and frustration reading levels all appeared to have much difficulty with understanding the functional value of sentences (29%, 43%, and 35%, respectively). In addition, subjects at the instructional and frustration reading levels had much difficulty with recognizing implications and making inferences (48% and 33%, respectively). Hence, the sub-skills at sentence level could pose much difficulty to all of the subjects irrespective of their reading levels. However, in Table 8 we could notice a slightly different performance. Only subjects at the instructional and frustration reading levels seemed to have much difficulty with two of the sub-skills: recognizing presuppositions underlying the text (45% and 35%) and recognizing text structure (43% and 36%, respectively). At the same time, subjects at the frustration reading level appeared to have a great deal of difficulty with understanding the functional value of paragraphs (15%). Unlike the data in Table 7, sub-skills at discourse level happened to pose difficulty only to subjects at the instructional and frustration reading levels.

The 3rd research question was concerned with finding out the relationships between subjects at different reading levels in the Cloze Test and different ability groups in the Reading Sub-skills Test. As shown in Table 10, test-takers at the independent reading level and high-scorers had a moderate (0.455) relationship; subjects at the instructional reading level and average-scorers also had a moderate (0.477) relationship; but learners at the frustration level and low-scorers had almost no (0.03) relationship. Despite these moderate and near-zero subgroup relationships, it was indicated in “Correlations between tests” that the two tests as a whole had a strong (0.987) relationship. This disparity could be attributed to the small number of subjects at each reading level and ability group. Thus, even though the tests had contrary theoretical bases, they evidently had many more commonalities than differences.

Consequently, these findings might have direct relevance to the field of language teaching in general and curriculum designers, textbook or module writers, instructors, and students in particular. The reasons are explained in the recommendations section.

Recommendations

As the data revealed, average-scorers and low-scorers on one hand, and students at the instructional reading level and those at the frustration reading level on the other, seemed to have experienced much difficulty with sub-skills at word, sentence, and discourse level to varying degrees. That is to say, average-scorers and test-takers at the instructional level appeared to have much difficulty with some of the sub-skills, while low-scorers and learners at the frustration reading level seemed to have much difficulty with almost all of the sub-skills. These findings may have implications for curriculum designers, textbook or module writers, instructors, and students. Therefore, the following recommendations are made:

  1. Curriculum designers or course designers should realize that students at different ability groups and reading levels had much difficulty with specific types of reading sub-skills. Hence, they should give more coverage to the sub-skills with which students had much difficulty.

  2. Course book writers should select appropriate reading texts while designing the reading course and ensure that the texts selected treat the sub-skills with which average-scorers, low-scorers, students at the instructional reading level, and those at the frustration level had much difficulty.

  3. Instructors who give reading courses and the Communicative English Skills course should give more coverage and emphasis to those sub-skills with which average-scorers, low-scorers, students at the instructional reading level, and those at the frustration level had much difficulty. In other words, they should design more tasks treating those sub-skills so that the students can practice them in and outside the classroom.

  4. Students should devote some of their time to practicing the sub-skills with which they seemed to have much difficulty. Besides academic texts, they should develop the habit of reading different types of texts, such as novels, short stories, magazines, and newspapers, in order to alleviate their difficulties with the reading sub-skills.

Availability of data and materials

The data can be made available on request at any time.

References

  • Alderson, J. C. (1990a). Testing reading comprehension skills (part one). Reading in a Foreign Language, 6(2), 425–438.

    Google Scholar 

  • Alderson, J. C. (1990b). Testing reading comprehension skills (part two). Reading in a Foreign Language, 7(1), 465–503.

    Google Scholar 

  • Alderson, J. C., & Lukmani, Y. (1989). Cognition and reading: Cognitive levels as embodied in test questions. Reading in a Foreign Language., 5(2), 253–270.

    Google Scholar 

  • Baker, P., & Ellece, S. (2011). Key terms in discourse analysis. Continuum.

    Google Scholar 

  • Carver, R. P. (1992). What do standardized tests of reading comprehension measure in terms of efficiency, accuracy and rate? Reading Research Quarterly, 27, 347–359.

    Google Scholar 

  • Davey, B. (1988). Factors affecting the difficulty of reading comprehension items for successful and unsuccessful readers. Experimental Education, 56, 67–76.

    Article  Google Scholar 

  • Davis, F. B. (1968). Research in comprehension in reading. Reading Research Quarterly, 3, 499–545.

    Article  Google Scholar 

  • Davoudi, M. (2005). Inference generation skills and text comprehension. The Reading Matrix, 5(1), 106–108.

    Google Scholar 

  • Drum, P. A., Calfee, R. C., & Cook, L. K. (1981). The effect of surface structure variables on performance in reading comprehension tests. Reading Research Quarterly, 16, 486–514.

    Article  Google Scholar 

  • Duke, N. K., & Pearson, P. D. (2002). Effective practices for developing reading comprehension. In A. E. Farstrup & S. J. Samuels (Eds.), What research has to say about reading Instruction (3rd ed., pp. 205–242). International Reading Association.

    Google Scholar 

  • Duke, N. K., & Pearson, P. D. (2008). Effective practices for developing reading comprehension. The Journal of Education, 189(1/2), 107–122.

    Google Scholar 

  • Fisher, R. (2005). Teaching children to think. Nelson Thornes Ltd.

    Google Scholar 

  • Frydenberg, G. (1982). Designing an ESP reading skills course. ELT Journal., 36(3), 156–163.

    Article  Google Scholar 

  • Geva, E., & Farnia, F. (2012). Developmental changes in the nature of language proficiency and reading fluency paint a more complex view of reading comprehension in ELL and EL1. Reading and Writing, 25, 1819–1845. https://doi.org/10.1007/s11145-011-9333-8

    Article  Google Scholar 

  • Gottardo, A., & Mueller, J. (2009). Are first- and second-language factors related in predicting second-language reading comprehension? A study of Spanish-speaking children acquiring English as a second language from first to second grade. Journal of Educational Psychology, 101, 330–344. https://doi.org/10.1037/a0014320

    Article  Google Scholar 

  • Grabe, W. (1991). Current developments in second language reading research. TESOL Quarterly, 25, 375–406.

    Article  Google Scholar 

  • Heaton, J. B. (1988). Writing English language tests. Foreign Language Teaching and Research Press.

    Google Scholar 

  • Hessamy, G., & Sadeghi, S. (2013). The relative difficulty and significance of reading skills. International Journal of English Language Education, 1(3), 208–222.

    Google Scholar 

  • Hughes, A. (1989). Testing for language teachers. Cambridge University Press.

    Google Scholar 

  • Jitendra, A. K., Chard, D., Hoppes, M. K., Renouf, K., & Gardill, M. C. (2001). An Evaluation of main ideas strategy instruction in four commercial reading programs: Implications for students with learning problems. Reading & Writing Quarterly, 17, 53–73. https://doi.org/10.1080/105735601455738

    Article  Google Scholar 

  • Kendeou, P., & van den Broek, P. (2007). The effects of prior knowledge and text structure on comprehension processes during reading of scientific texts. Memory & Cognition, 35(7), 1567–1577. https://doi.org/10.3758/BF03193491

    Article  Google Scholar 

  • Khaleel, L. M. (2010). An analysis of presupposition triggers in English journalistic texts. Journal of College of Education for Women, 21(2), 523–551.

    Google Scholar 

  • Kim, Y. H., & Jang, E. E. (2009). Differential functioning of reading sub-skills on the OSSLT for L1 and ELL students: A multidimensionality model-based DBF/DIF approach. Language Learning, 59(4), 825–865.

    Article  Google Scholar 

  • Kim, Y. G., & Piper, B. (2019). Component skills of reading and their structural relations: Evidence from three Sub-Saharan African languages with transparent orthographies. Journal of Research in Reading, 42(2), 326–348.

    Article  Google Scholar 

  • Kirby, J. R., & Savage, R. S. (2008). Can the simple view deal with the complexities of reading? Literacy, 42, 75–82. https://doi.org/10.1111/j.1741-4369.2008.00487.x

    Article  Google Scholar 

  • Lie, M., et al. (2020). The componential model of reading in bilingual learners. Journal of Educational Psychology. https://doi.org/10.1037/edu0000459

    Article  Google Scholar 

  • Liu, F. (2010). Reading abilities and strategies: A short introduction. International Education Studies, 3(3), 153–157.

    Google Scholar 

  • Lumley, T. J. N. (1993). Reading comprehension sub-skills: Teachers’ perceptions of content in an EAP test. Melbourne Papers in Applied Linguistics, 2(1), 25–55.

    Google Scholar 

  • Lunzer, E., Waite, M., & Dolan, T. (1979). Comprehension and comprehension tests. In E. Lunzer & K. Garner (Eds.), The effective use of reading. Heinemann Educational Books.

    Google Scholar 

  • Meyer, B. J. F., Brandt, D. M., & Bluth, G. J. (1980). Use of top-level structure in text: Key for reading comprehension of ninth-grade students. Reading Research Quarterly, 16(1), 72–103. https://doi.org/10.2307/747349

    Article  Google Scholar 

  • Morley, J. (2009). Lexical cohesion and rhetorical structure. In J. Flowerdew & M. Mahlberg (Eds.), Lexical cohesion and corpus linguistics (pp. 5–20). John Benjamins.

    Chapter  Google Scholar 

  • Munby, J. (1978). Communicative syllabus design. Cambridge University Press.

    Google Scholar 

  • Nuttal, C. (1996). Teaching reading skills in a foreign language. Heinemann Educational Books.

    Google Scholar 

  • Oller, J. W. (1979). Language tests at school: A pragmatic approach. Longman.

    Google Scholar 

  • Pearson, P. D., Roehler, L. R., Dole, J. A., & Duffy, G. G. (1992). Developing expertise in reading comprehension. In S. J. Samuels & A. E. Farstrup (Eds.), What research has to say about reading instruction (2nd ed., pp. 145–199). International Reading Association.

    Google Scholar 

  • Pollitt, A., Hutchinson, C., Entwistle, N., & DeLuca, C. (1985). What makes exam questions difficult? An analysis of ‘O’ grade questions and answers. Scottish Academic Press.

    Google Scholar 

  • Preszler, J. (2006) On target: Strategies to make readers make meaning through inferences (Grade 4–12). ESA Regions 6&7. Retrieved from http://www.rainbowschools.ca/virtual_library/tea

  • Proctor, C. P., Carlo, M., August, D., & Snow, C. (2005). Native Spanish-speaking children reading in English: Toward a model of comprehension. Journal of Educational Psychology, 97, 246–256. https://doi.org/10.1037/0022-0663.97.2.246

    Article  Google Scholar 

  • Rankin, E. F., & Culhane, J. W. (1969). Comparable cloze and multiple-choice comprehension test scores. Journal of Reading, 13, 193–198.

    Google Scholar 

  • Rapp, D. N., van den Broek, P., McMaster, K. L., Kendeou, P., & Espin, C. A. (2007). Higher-order comprehension processes in struggling readers: A perspective for research and intervention. Scientific Studies of Reading, 11(4), 289–312. https://doi.org/10.1080/10888430701530417

    Article  Google Scholar 

  • Rosenshine, B. V. (1980). Skills hierarchies in reading comprehension in Spiro et al. (pp. 535–554). https://www.taylorfrancis.com/chapters/edit/10.4324/9781315107493-29/skill-hierarchies-reading-comprehension-barak-rosenshine

  • Rost, D. H. (1993). Assessing the different components of reading comprehension: Fact or fiction. Language Testing, 10(1), 79–92.

    Article  Google Scholar 

  • Taylor, W. (1956). Recent developments in the use of the cloze procedure. Journalism Quarterly, 33, 42–48.

    Article  Google Scholar 

  • Tyler, S. A. (1978). The said and the unsaid. Academic Press.

    Google Scholar 

  • Urquhart, S., & Weir, C. (1998). Reading in second language: Process, product and practice. Longman.

    Google Scholar 

  • Warnidah, N., & Suwarno, B. (2016). Students’ difficulties in making inferences in reading narrative passages at the social eleventh grade of Sman 1 Curup. Journal of Applied Linguistics and Literature, 2(2), 78–94.

    Google Scholar 

  • Watson, S. M. R., Gable, R. A., Gear, S. B., & Hughes, K. C. (2012). Evidence-based strategies for improving the reading comprehension of secondary students: Implications for students with learning disabilities. Learning Disabilities Research & Practice, 27, 79–89. https://doi.org/10.1111/j.1540-5826.2012.00353.x


  • Wendiyfraw, W., Abebe, T., & Adane, P. (2016). The English language proficiency level of first year students in Dilla University. The Ethiopian Journal of Education, 36(1), 111–147.


  • Weir, C. J., & Porter, D. (1994). The multi-divisible or unitary nature of reading: The language tester between Scylla and Charybdis. Reading in a Foreign Language, 10(2), 1–19.



Acknowledgements

I would like to thank Dr. Abate Anjulo, head of the English Department, for his help during data collection.

Funding

This research was funded by Arba Minch University, mainly for data collection and analysis.

Author information


Contributions

Not applicable, as there is only one author. The author has read and approved the final version of the manuscript.

Author’s information

Wendiyfraw Wanna received his BA, MA, and Ph.D. degrees from Addis Ababa University. He taught English for 15 years in different secondary schools in Ethiopia and has been teaching TEFL courses to undergraduate and postgraduate students at Dilla University and Arba Minch University since 2006. He has also conducted research on various issues in language teaching and serves as a supervisor for many MA and Ph.D. students.

Corresponding author

Correspondence to Wendiyfraw Wanna.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Cloze Test

[Figures a–b: Cloze Test]

Appendix 2

[Figures c–j]

Appendix 3

[Figures k–o]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Wanna, W. The interface between unitary hypothesis and componential approach to testing reading skills: do subjects show similar levels of performance with respect to specific reading sub-skills in tests representing both theories? Descriptive correlational study. Asian. J. Second. Foreign. Lang. Educ. 7, 22 (2022). https://doi.org/10.1186/s40862-022-00149-2


Keywords