
The language deficit: a comparison of the critical thinking skills of Asian students in first and second language contexts

Abstract

With a growing number of Asian students attending Western universities, the difficulties they seem to face in adapting to a new academic environment have provoked much discussion amongst educators, particularly with regard to critical thinking (CT) skills. Many educators have claimed that, as a result of their cultural and educational backgrounds, Asian students lack the CT skills essential for academic tasks such as essay writing and debates. Other researchers, however, have argued that this is due simply to the disadvantages of carrying out studies in a foreign language. In fact, there have been surprisingly few studies directly comparing Asian students’ CT skills in their first and second languages. Those that have been done have tended to employ standardised CT tests which, in their discrete, short-answer format, do not accurately reflect the tasks students carry out in university courses. In this study, therefore, two classes of Japanese university students, all with TOEFL scores high enough to enter Western universities, were asked to carry out an oral and written debate, one class in Japanese and the other in English. Evaluations of their performances by independent raters revealed stark differences between the two classes in their ability to construct and deconstruct arguments, find logical inconsistencies and express themselves clearly and persuasively.

Introduction

In the midst of a rapidly changing world, critical thinking has become one of the key attributes demanded of students in higher education. It has long been contended that for East Asian students studying at Western universities, the ability to think critically has proved particularly challenging, given the differing character of their educational and cultural backgrounds (Ballard & Clanchy, 1991; Atkinson, 1997; Ellwood, 2000; Davies, 2013; Shaheen, 2016). Paton (2005, p. 1) has observed: ‘In an oft-heard expression of exasperation, academics in Australia claim that Chinese students do not partake naturally in critical thinking because of a perception of mere rote learning and the lack of overt participation in classroom discussions.’ Moore (2011, p. 12) adds that the ‘simple binary of critical and non-critical educational cultures persists as a powerful image in our universities.’

Many researchers have argued, however, that such judgements do not adequately take into account the impact on academic performance of language ability (Lun et al., 2010; Paton, 2011). When Asian students have been tested in their first language, in critical thinking as well as other more traditional disciplines, they tend to score highly (Floyd, 2011; OECD, 2014). This phenomenon has sometimes been referred to as the ‘Asian paradox’ (Biggs, 1996).

One weakness of these studies, however, is that they have tended to employ standardised critical thinking tests, which differ in fundamental respects from the tasks students are required to carry out at university. Candidates are presented with small discrete items, which test their ability to spot logical flaws, make inferences, draw conclusions, identify weak arguments and so on. While these are all important components of critical thinking, they do not require students to create a well-reasoned argument from scratch. In academic essays and debates, students must interpret a question, gather relevant information from primary and secondary sources, analyse and synthesise the information, and from there develop a strong and original argument. One can question, therefore, whether the tests employed in previous studies actually assess what educators are talking about when they discuss the lack of CT skills in East Asian students.

This study aims, therefore, to examine the impact of language on the kind of tasks international students are required to carry out in real-life university courses. Two classes of sixteen Japanese students at a private university in Tokyo were asked to prepare and perform a debate, one class in Japanese and the other class in English. The debate consisted of three speeches: a constructive speech, which required the kind of skills employed in constructing an academic essay; a cross-examination speech and a refutation speech, both of which reflect the demands of carrying out a class discussion. All the students taught in English possessed TOEFL scores sufficient to attend most universities in the U.S., the U.K., Australia, or New Zealand. Transcripts were made of the debates, with all Japanese work translated into English. Then three raters based at universities outside of Japan were asked to evaluate the debate transcripts using criteria based on the taxonomy of critical thinking drawn up by Facione (1990). They were not told the purpose of the study. The results of the study offer important insights into the impact of language on critical thinking, albeit from a small sample size.

In order to explain the rationale and background of the study, the paper will begin by outlining previous examinations of the critical thinking skills of Asian students, highlighting the inconclusiveness of many of these studies. From there, it will describe the study itself, explaining how the debates were carried out by the students and then evaluated by the raters. The paper will conclude with a discussion of the results and their significance for educators of international students in Western universities.

Critical thinking and Asian students

In describing her experience of teaching an ethnographic culture course to Japanese students at the University of Technology in Sydney, Ellwood (2000, p. 4) claimed that the students ‘fit the stereotypes of being passive and non-participatory, with little ability in the type of critical enquiry which is so valued by the western academy.’ Leaving aside the specific circumstances of the course and its students, Ellwood’s complaint is not untypical of educators working with East Asian students in English-language contexts. The argument is that the respectful, Confucian cultural values instilled in Asian societies and the exam-driven, teacher-centred nature of their education systems work against producing students with the kind of critical thinking skills required by Western higher education. Gieve (1998, p. 128) has said that inculcating Asian students into Western learning environments ‘may require a wholesale reorientation of students’ cultural norms, values, beliefs and attitudes.’

Considering the importance of this issue both for international Asian learners and the institutions responsible for nurturing them, there has been surprisingly little empirical research into whether these claims are valid or not. Of the studies that have been conducted, the vast majority deal not with the critical thinking skills of Asian learners but with their dispositions and attitudes towards using them. These studies of CT dispositions have revealed a somewhat mixed picture. On the one hand, studies of preservice teachers in the USA and China by McBride et al. (2002) and of Hong Kong Chinese and Australian nursing students by Tiwari et al. (2003) found that the Chinese sample scored significantly lower on the California Critical Thinking Disposition Inventory scale than their Western counterparts, indicating that the Chinese students might be less motivated to use critical thinking. On the other hand, however, studies by Jones (2005), Paton (2011) and Manalo et al. (2013) found few or no differences between Asian and Western students in their learning dispositions. After interviewing Chinese students about critical thinking, Paton (2011, p. 36) concluded that ‘the depth and variety of thought shown in the students’ responses indicate a remarkable level of critical thinking, which would seem to belie the strident claims by those such as Atkinson (1997) that critical thinking is the preserve of Western culture’.

Comparisons of the critical thinking skills, rather than dispositions, of Asian and Western learners are few and far between. Some have been carried out in English or have suffered from significant selection bias, making it difficult to gain a true picture of CT abilities. A comparison between local students and international Asian students at a university in New Zealand by Lun et al. (2010) found that Asian students gained lower scores on the Halpern Critical Thinking Assessment using Everyday Situations (HCTAES), but the authors surmised that this was mainly a consequence of the test being carried out in the students’ second language. In a study by Hau et al. (2006), Chinese students in Hong Kong actually scored higher than American university students on the HCTAES, but the authors argued this was because the Hong Kong Chinese sample was recruited from a more selective institution than that of the American sample.

When Asian students have been tested in their first language, their results have often been superior to those of learners from other parts of the world. In the largest-scale test of comparative academic ability, conducted by the OECD, students from East Asia not only came out on top in the traditional subjects of maths, science and literacy but also occupied the top four places in a newly developed problem-solving test. The OECD described pupils who excelled in the test as ‘quick learners, highly inquisitive and able to solve unstructured problems in unfamiliar contexts’ (OECD, 2014, p. 44). Moreover, a recent study conducted at Stanford University found that Chinese freshmen in computer science and engineering programmes had critical thinking skills, including the ability to identify assumptions, test hypotheses and draw relationships between variables, that were around two or three years ahead of their peers in the United States and Russia (Hernandez, 2016). Floyd (2011), meanwhile, tested Chinese speakers with the Watson-Glaser Critical Thinking Appraisal and observed that scores were significantly higher when they did the test in their native language than in English.

So, does this mean that claims about Asian students lacking CT skills are false? Can the difficulties they reportedly face at Western universities be blamed purely on the disadvantages of studying in a second language? Previous studies may seem to suggest this is the case. However, all of these studies share a significant drawback: the standardised tests they employ to evaluate critical thinking skills – such as PISA, the Watson-Glaser and the HCTAES – do not adequately replicate the kind of academic tasks students must carry out in their studies at university. The PISA test, for example, assesses pupils’ ability to devise strategies for tackling unfamiliar problems, from working out the quickest travel time across a city to dealing with a new digital device. These problems are often of a mathematical or statistical nature, involving the application of calculation techniques to real-life problems. The HCTAES and Watson-Glaser tests, meanwhile, present candidates with short, discrete items from which they must draw logical inferences or evaluate arguments on the basis of their strength and soundness. Candidates choose the most appropriate response from a set of multiple-choice options.

While the skills assessed by such tests are all components of critical thinking, as conceptualised by researchers such as Facione (1990) and Ennis (1987), they do not require candidates to create a well-reasoned argument from scratch. In most non-scientific academic fields, students are expected to compose long-form argumentative essays or participate in academic debates. They are required to research information independently from a variety of sources, synthesise that information logically and present it in an original and persuasive form. This is quite different from evaluating a short item of given information and choosing from a set of multiple-choice responses. Students in East Asia are well-practised in multiple-choice examinations of many kinds and it does not seem surprising, given the high standards of education in the region, that they generally score well in such tests.

When educators in Western universities discuss the lack of CT skills in international Asian students, they are usually referring to their ability to compose argumentative essays or participate in academic discussions. There is indeed evidence that Asian students gain far less practice at these tasks in school than their counterparts in the West due to the focus on fact-based examinations (Shaheen, 2016). Mulvey (2016), for example, reported that out of 300 students surveyed over six years in two universities in Japan, not a single student had written an argumentative essay in either Japanese or English at high school. Chinese education, too, tends to be teacher-centred with large class sizes and few opportunities for student discussion. Memorisation of known facts takes precedence over the composition of original arguments.

This study, then, seeks to shed light on two crucial questions related to the debate over Asian students and critical thinking in English-language contexts: (1) To what extent do Japanese university students display critical thinking skills in the composition of long-form arguments? (2) What effect does language have on student performance in such tasks? In the following sections, the design, implementation and results of the study will be discussed.

Methods

The study took place at a large private university in Tokyo. Two classes of sixteen first-year students were taught by the author for one period a week over a one-semester course, one class in Japanese and the other in English. The students in the English class possessed TOEFL iBT scores ranging from 74 to 92, and most would go on to study abroad for at least one year before graduation. None of the students had any prior experience with debates in either English or Japanese, and they received training during the course only in the proper format of a debate performance. For four weeks during their respective courses, the students of both classes were required to prepare and perform an academic debate on the following theme: ‘Violent video games lead to violent behaviour.’ This theme was chosen because it would force the students to engage with various kinds of source material and data, both quantitative and qualitative, requiring them to distinguish between reliable and unreliable evidence, an important component of critical thinking. Although it was regrettable that students could not be given a choice of debate topics, it was considered important to limit the degree of variability between the groups.

The students carried out the debates in groups of four, with two members speaking in favour of the proposition and two against. Despite the variability in TOEFL scores amongst the sixteen students of the English class, the groups were not segregated by proficiency. With such a small sample size, any attempt to generalise about the relationship between specific degree of language proficiency and performance in the debates would have been flawed. The aim of the study was more modest: to compare the impact that language choice as a whole had on the critical thinking skills of the students.

A slightly simplified version of the Lincoln-Douglas format was chosen for the debates, with six speeches in total as follows:

  1. Affirmative constructive speech (6 min)

  2. Cross-examination of Affirmative by Negative (2 min)

  3. Negative constructive speech (6 min)

  4. Cross-examination of Negative by Affirmative (2 min)

  5. Affirmative rebuttal (2 min)

  6. Negative rebuttal (2 min)

The students were given three weeks to prepare the debates to allow them sufficient time to collect and analyse relevant data. It was considered important to give the English class the same amount of preparation time as the Japanese class, despite the handicap of language, as this more closely mirrored the situation international students face when studying abroad. In total, there were eight debates, four in English and four in Japanese, each lasting approximately twenty minutes. The debates were carried out within single ninety-minute class periods.

Evaluation was carried out by three independent raters teaching at three different Australian universities. The raters were experienced lecturers of liberal arts courses, who regularly engaged students in seminar discussions and used argumentative essays as their primary form of assessment. They were given transcripts of the eight debates, with the Japanese debates translated into English. The English transcripts were corrected for grammatical and lexical errors beforehand in an attempt to ameliorate (if not eliminate entirely) biases that might arise from imperfect English, while the translated Japanese speeches were back-translated into Japanese to check the accuracy and reliability of the translations. The raters were not told the rationale or focus of the study.

The evaluation factors were informed by the commonly accepted taxonomies of critical thinking skills put forward by Ennis (1987), Facione (1990) and others. The taxonomy of Facione (1990) was considered most practical for use in this study (Table 1):

Table 1 Consensus list of critical thinking cognitive skills and sub-skills (Facione 1990)

This taxonomy was adapted to produce an evaluation framework for each of the three types of speeches produced in the debates. The evaluation factors were tested for inter-rater reliability during a pilot study until a final framework was established (Table 2):

Table 2 Evaluation factors for Japanese and English debates

Each factor was rated on a five-point Likert scale of quality: 1 = very poor, 2 = poor, 3 = acceptable, 4 = good, 5 = very good. The raters were also asked to provide written comments on each speech, both for each group and for each class as a whole. For the sake of brevity, only the whole-class comments are recorded in the sections below.
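To make the scoring scheme concrete, the short Python sketch below shows how a set of 1–5 Likert ratings of this kind could be aggregated into mean scores per speech type, of the sort reported in Tables 3, 4 and 5. The factor counts, rater scores and speech types in the sketch are invented for illustration and do not reproduce the study’s data.

```python
# Minimal sketch (hypothetical data): aggregating 1-5 Likert ratings
# from three raters into a mean score per speech type.
from statistics import mean

# ratings[speech_type][rater] = one 1-5 score per evaluation factor
ratings = {
    "constructive": {
        "Rater A": [4, 4, 3, 4],
        "Rater B": [3, 4, 4, 3],
        "Rater C": [4, 3, 4, 4],
    },
    "cross-examination": {
        "Rater A": [3, 3, 2, 3],
        "Rater B": [3, 2, 3, 3],
        "Rater C": [2, 3, 3, 2],
    },
}

for speech_type, by_rater in ratings.items():
    # Pool all factor scores from all raters for this speech type
    all_scores = [score for scores in by_rater.values() for score in scores]
    print(f"{speech_type}: mean rating = {mean(all_scores):.2f}")
```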

Results and discussion

Before the responses of the three raters were analysed for comparative purposes, they were subjected to inter-rater consistency analysis using Krippendorff’s alpha statistic. Inter-rater reliability was found to be 0.73 for the Japanese debates, with an average pairwise percentage agreement of 84.8%, and 0.692 for the English debates, with an average pairwise percentage agreement of 78.9%. No two ratings of the same item differed by more than one point on the Likert scale.
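For readers unfamiliar with these consistency measures, the following sketch illustrates how average pairwise percentage agreement between three raters can be computed from a set of ratings. The scores are invented for illustration; Krippendorff’s alpha itself would normally be calculated with a dedicated statistical package (for example, the Python `krippendorff` library) rather than by hand.

```python
# Minimal sketch (hypothetical scores): average pairwise percentage agreement
# between three raters, each giving one 1-5 rating per evaluated item.
from itertools import combinations

scores = {
    "Rater A": [4, 3, 4, 2, 3, 4],
    "Rater B": [4, 3, 3, 2, 3, 4],
    "Rater C": [4, 4, 4, 2, 2, 4],
}

def pairwise_agreement(a, b):
    """Proportion of items on which two raters gave an identical rating."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

pairs = list(combinations(scores, 2))
agreements = [pairwise_agreement(scores[r1], scores[r2]) for r1, r2 in pairs]
for (r1, r2), agreement in zip(pairs, agreements):
    print(f"{r1} vs {r2}: {agreement:.1%}")

average_agreement = sum(agreements) / len(agreements)
print(f"Average pairwise agreement: {average_agreement:.1%}")
```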

The results themselves revealed significant differences between the ratings of the Japanese debates and those of the English debates. For the sake of clarity, each of the three types of speeches (constructive, cross-examination and rebuttal) will be examined in turn:

Constructive speeches

The aim of the constructive speech in a debate is to lay out within a logical and formal structure the major arguments of one’s case. While in official debate contests, contestants will be given little time to prepare their arguments, the students on this course had three weeks in which to consider and research their case. It was assumed that this extra time would allow them to base their arguments on solid, verifiable evidence taken from reputable sources, all key ingredients of what are considered to be critical thinking skills. Since the students had time to prepare their speeches, it was hypothesised that, of all the three types of speeches, this would be the one least affected by language deficit.

The sixteen constructive speeches (four affirmative and four negative in each language) were transcribed into written form, with the Japanese speeches translated into English, and given to the three raters along with the rating rubric. The raters scored them as follows (Table 3):

Table 3 Mean ratings for constructive speeches of Japanese and English debates

The raters’ specific comments on the Japanese constructive speeches were as follows:

  • Rater A: On the whole, the constructive speeches displayed clear evidence of critical thinking. They possessed persuasive and logical argumentative frameworks which were supported by sufficient evidence of a generally trustworthy nature.

  • Rater B: While there were certain weaknesses in the constructive speeches, most notably the failure to adequately clarify the key terms of the debate, they were generally well-constructed with at least three major points backed up by reliable and data-based evidence.

  • Rater C: There were no significant weaknesses in the constructive speeches, though at times they could have employed a greater variety of source material. Their cases were explained coherently with a sophisticated level of expression and a clear demarcation between argument and support.

They commented on the English constructive speeches in the following way:

  • Rater A: While the speakers made an attempt to engage with the topic in a logical manner, their arguments lacked both sophistication and depth. There was an over-reliance on just one or two sources and a failure to argue their case persuasively.

  • Rater B: The speeches on the whole were rather poorly-constructed. They made an attempt to construct their arguments in a logical and coherent framework, but they failed to present sufficient evidence to support their points. At times, it was not clear what arguments they were trying to make.

  • Rater C: Although one or two of the speeches offered a persuasive argument, there was generally a lack of depth to the points made and the evidence brought out in support. It seemed the speakers had based their case on only a few sources, which contributed to the overall impression of under-preparation and superficiality.

Both the ratings and the comments reveal significant differences in the quality of the speeches in Japanese and English. While both sets of students attempted to organise their constructive speeches in a logical structure, those in Japanese possessed greater depth and persuasive strength with a wider range of strong, supportive evidence. This was reflected in the bibliography which the students were asked to provide at the end of their speeches. The Japanese speeches listed, on average, 5.5 separate sources consisting mainly of academic journal papers and serious newspaper articles written in Japanese. The English speeches, on the other hand, had only 3.3 sources on average. They included a mixture of Japanese and English material, but what was notable was that the sources of both languages included a number of references that would normally be considered unreliable in academic work, including blog posts and websites of unverifiable origin.

While the students were reminded of the importance of choosing trustworthy sources at the beginning of the course, it seemed that the effort of preparing and performing a debate in a second language affected their ability to judge the reliability of their source material. One possible reason for this is that the kind of sources considered acceptable in academic work tend to be longer and more complex than other forms of material available online. In a second language, it is harder both to assimilate complex material and, if the material is in the student’s first language, to translate or summarise it effectively into the second language. This would explain why students who carried out the debate in English tended to choose shorter and easier sources, both in English and in Japanese.

One weakness the raters found in both the English and the Japanese speeches was the failure of the students to adequately clarify the significance and meaning of the debate’s key terms. This included, for example, defining the terms ‘violent video games’ and ‘violent behaviour’, as well as clarifying the connotations of the verb ‘lead to’. This may be regarded as a weakness in critical thinking skills. The importance of clarifying terms was not made explicit to the students before the debate, and their general failure to do so may be a reflection of their lack of experience in this form of academic task.

Cross-examination speeches

Unlike the constructive speeches, the students had very little time to prepare the cross-examination speeches. A cross-examination speech requires the debater to understand and analyse the arguments made by the other side as they are being given, and from there to point out specific flaws and weaknesses. It is a challenging task even for a native speaker experienced in debate, and it was felt that, if the English language speakers did prove to be at a disadvantage, it would be particularly evident in the cross-examination speeches (Table 4).

Table 4 Mean ratings for cross-examination speeches of Japanese and English debates

Along with their numerical evaluations, the raters made comments on the Japanese speeches as follows:

  • Rater A: The speakers were able, on the whole, to pick out the main arguments of their opponents and find something to counter-argue about them. Although they did miss some counter-claims, their cross-examination speeches were logically constructed and clearly explained.

  • Rater B: The cross-examination speeches were not as sharp and well-supported as the constructive speeches, but since the students had no time to prepare for them, this is to be expected. I found the students picked out most (if not all) of the obvious counter-arguments and explained them with some clarity.

  • Rater C: As I was reading through the constructive speeches, I found myself searching for the kind of weaknesses I would point out in a cross-examination speech. The speakers impressed me, on several occasions, by finding exactly the same flaws I myself had, be they argumentative points that had natural counter-arguments or more specific weaknesses in the evidence the opposition had used.

The comments on the English cross-examination speeches went as follows:

  • Rater A: These speeches were largely disappointing. The speakers failed to show a clear understanding of their opponents’ arguments and, consequently, were unable to make convincing counter-arguments. Several of the speakers were scarcely able to carry out any cross-examination at all. Their speeches had little content other than a very basic summary of their opponents’ case along with weak unsupported statements, such as ‘We don’t agree’.

  • Rater B: The speakers were barely able to make what we could really call a cross-examination. When they did attempt to point out weaknesses in their opponents’ argument, there was a mechanical nature to their points e.g. ‘the evidence they presented was old’. While this may have been true to some extent, it did not show an engagement with the substance of the arguments.

  • Rater C: The speakers seemed to struggle to comprehend their opponents’ points. Even though the constructive speeches themselves were rather weak and should have been easy to counter-argue, there was very little attempt to truly cross-examine them. Some counter-arguments were extremely weak e.g. ‘We don’t agree with this idea.’

The evaluations of the raters largely confirmed the expectation that the cross-examination speeches would be more challenging for both sets of students, but particularly so for those working in a second language. The English speakers seemed to have trouble assimilating the arguments given by the opposition in their constructive speeches and were, therefore, unable to produce any convincing counter-arguments. This was evident simply in the length of the speeches they were able to produce within the allotted time of two minutes. While the Japanese speeches contained an average of 312 words (when translated into English), the English speeches had just 122 words. Much of the allotted time was wasted with hesitations and pauses as the speakers struggled to compose a meaningful response.

As the raters mentioned, the counterpoints that were made in the English speeches tended to have little persuasive power. Three of the speeches contained statements such as ‘We don’t agree with this’ or ‘This argument is not strong’ without any clear explanation of the reason. Two others posed the questions ‘Is this true?’ and ‘Can we say this?’ but failed to provide any grounds on which to base them. This contrasted with the Japanese speeches in which, on the whole, the cross-examination was carried out on a systematic point-by-point basis in which the opponents’ arguments were briefly summarised and then questioned on a specific basis. This is not to say that the Japanese cross-examinations were without problems. At times, the speakers missed certain inconsistencies in their opponents’ constructive speeches which were noticed by the researcher and the raters. However, considering the students’ inexperience with debate, this is perhaps to be expected.

Refutation speeches

The aim of a refutation speech is to answer the doubts and questions raised by the opposition in the cross-examination speech and from there to re-state one’s own case in persuasive terms. As with the cross-examination speeches, the students had to compose the refutation speeches on the spot, and thus the English speakers were at a significant disadvantage compared to their Japanese counterparts. The three raters evaluated this last round of speeches as follows (Table 5):

Table 5 Mean ratings for refutation speeches of Japanese and English debates

The raters made comments on the Japanese speeches as follows:

  • Rater A: The refutations were the weakest of the three speeches. Although the speakers made an attempt to tackle the points made during the cross-examination, they did so only to a mediocre level. Only occasionally did they successfully refute their opponents’ points.

  • Rater B: The speakers seemed to lack a clear strategy for making these speeches. They tended to repeat their opponents’ cross-examination points without effectively providing counter-arguments, other than repeating their own original arguments.

  • Rater C: The speakers managed to re-state their own arguments with reasonable success, but they failed to substantially refute their opponents’ cross-examination. Without access to fresh evidence, they were often unable to find ways of answering their opponents’ points.

For the English speeches, the raters made the following comments:

  • Rater A: The speakers made little attempt to engage with their opponents’ cross-examination, though part of the reason for this may be that the cross-examination speeches themselves were unclear. Rather than refute, they mainly repeated the same arguments made in their constructive speeches.

  • Rater B: These speeches added little to the debate. The speakers simply repeated their main arguments again, though with less clarity and persuasive power.

  • Rater C: Since the cross-examination speeches were of low quality, it was not surprising that the refutation speeches would be too since it was not clear what points the speakers had to refute. The speeches mainly consisted of a repetition of the constructive speeches.

The refutation speeches proved to be the weakest of the three types of speeches in the debate, both in English and in Japanese. For the English speeches, the students were not able to compose anything that could truly be regarded as a refutation. As Raters A and C commented, this was partly due to the fact that the cross-examination speeches often failed to make any clear points that could be refuted. But, even taking that into consideration, the students made little attempt to engage with any of the cross-examination points, using the refutation speeches purely to summarise their constructive speeches. To some extent, this was also true of the Japanese speeches. In Japanese, the speakers did acknowledge their opponents’ arguments, but they were not often able to refute them persuasively. This may reflect the students’ lack of experience with debate and its conventions. With no time to prepare material for counter-arguments, debaters are forced to think on their feet, a task that requires practice as well as skill.

Where the Japanese speeches were superior to the English ones was in the length, clarity and coherence of their arguments. In the allotted two minutes, the Japanese speakers produced 346 words on average compared to 103 words for the English speakers. They organised their speeches into a point-by-point format, while the English speeches tended to be vague and hesitant in structure and content. It appeared as though the English speakers’ minds were so preoccupied with finding the appropriate words to say, there was little mental space available for a proper consideration of argument and counter-argument.

Conclusions

This paper has compared the performance of two classes of Japanese university students in an academic debate, with one class performing the debate in their native language and the other in English. The rationale for the study was that debate is a more accurate reflection of the kind of tasks international students face when they enter higher education in the West. The constructive speeches, which the students were given three weeks to prepare for, required them to seek out reliable sources, research relevant information and synthesise it into a clear logical argument. In terms of critical thinking (if not mode of discourse), it was similar, therefore, to the skills required for academic essay writing, a staple of most non-scientific disciplines at university. The cross-examination and refutation speeches, on the other hand, reflected the type of spontaneous thinking required for class discussions, in which students are forced to make and defend arguments before their teachers and peers. It was hypothesised that, given the absence of preparation time, the cross-examination and refutation speeches would be more adversely affected by language than the constructive speeches.

The study found that, despite the English proficiency levels of the students being equivalent to those required for entrance to Western universities, language proved to be a considerable handicap when it came to performance. In all three speeches of the debates, the English speakers were given significantly lower evaluations by the three raters than the Japanese speakers. While comments for the Japanese debates were generally positive in tone, acknowledging the students’ use of several aspects of critical thinking, those for the English debates pointed out serious weaknesses in argument, depth and explanation. As the following figure makes clear, all of the four groups in the Japanese class significantly outperformed those of the English class (Fig. 1).

Fig. 1 Mean ratings for each type of speech by group

What, specifically, did language seem to have the most significant adverse effect upon? In terms of the constructive speeches, the students presenting in English made an attempt to compose a coherent case, but their arguments lacked depth and sophistication. They relied on fewer sources than the Japanese speakers, which led them to produce arguments that were not supported by convincing evidence. For instance, three of the four English groups presenting on the affirmative side of the debate made the argument that there have been real-life examples of violent video games leading to violent behaviour. However, they provided only one or two specific incidents as support and, furthermore, failed to interrogate these incidents sufficiently to demonstrate that video games were indeed a significant factor. The Japanese speakers making a similar point, on the other hand, presented statistics from a research study that detailed how many cases over a period of a decade were found to be linked to video games and included testimony from a psychologist in specific examples.

There was also a difference in the type of sources used by the two sets of students. Of the 44 references included by the eight affirmative and negative teams presenting in Japanese, 37 were from what would be regarded as reliable sources, including non-fiction academic books (5), academic papers (19), serious newspaper articles (8) and online articles written by identifiable experts (5). All of these sources were written in Japanese. Of the 26 references listed by the eight teams presenting in English, on the other hand, only 14 came from reliable sources. The non-reliable sources consisted of online articles of unknown or non-expert authorship. Significantly, while 11 of the 14 reliable sources were written in Japanese, 9 of the 12 unreliable sources were written in English. The majority of these 9 English sources were short, less than 800 words in length on average. The students were either unable to properly distinguish between reliable and unreliable information in English, or they were intimidated by the greater length and complexity of the more serious sources and, therefore, tempted to choose those that were simpler and shorter without an adequate consideration of their worth.

The cross-examination and rebuttal speeches were evaluated lower than the constructive speeches in both Japanese and English, reflecting the greater difficulty of composing arguments without adequate preparation time. Somewhat contrary to expectations, the evaluation gap between the Japanese and English speeches was generally similar for the two spontaneous speeches and the constructive speeches (though the rebuttal speech had the highest average gap of 1.89). Nevertheless, both the ratings and the raters’ comments illustrate the generally low quality of the spontaneous English speeches. While the raters praised the Japanese speeches for managing to pick out most, if not all, of the salient points in the cross-examinations and for at least attempting to refute their opponents’ cross-examination, they noted that the English speeches contained barely any attempt to explicate a clear and logical argument. With frequent hesitations as well as repetitions, they were less than half the length of the Japanese speeches and lacked both structure and content.

It seems that in carrying out the spontaneous speeches in particular, the students speaking in English may have suffered from what has been termed ‘cognitive overload’ (Paas et al., 2003). According to cognitive load theory, the amount of information that can be stored and processed in the working memory is limited. Language processing requires the use of cognitive resources in working memory, as does the application of critical thinking skills. If a considerable amount of those resources is expended on utilising a foreign language, there may not be adequate resources remaining for the satisfactory execution of critical thinking (Cook, 1993; Koda, 2005; Campbell et al., 2007). Cognitive overload has been used as an explanation for lower cognitive performance in other studies involving a second language. Takano and Noda (1993), for example, observed that speakers of Japanese performed less well on a calculation task when they carried it out in English rather than in Japanese, while native speakers of English did less well when doing the task in Japanese. Manalo and Uesaka (2012) showed that students were less able to use diagrams when presenting information in a second language. In this study, it appeared that for the cross-examination and rebuttal speeches in particular, the students simply did not have the mental capacity to cope with the demands of the task and the language at the same time, resulting in a significantly impaired performance.

Cognitive load theory helps to explain why East Asian students seem to struggle to display adequate critical thinking skills during courses at Western institutions. This paper has shown, albeit with a very limited sample of students in one particular context, that Japanese students do have a capacity for critical thinking in their own language. While the purpose of the study was not to compare the skills of Asian and Western students, the debates conducted in Japanese were evaluated relatively highly by Western tertiary-level educators, who were purposefully kept unaware of the parameters of the study. This suggests that many of the problems faced by Asian students overseas may be attributable to the handicap of language. This does not mean, of course, that they do not need to be taught both the importance of critical thinking and how it can be put into practice in their academic work. However, it does suggest that we ought to be wary of making sweeping judgements about Asian students and their supposed incapacity for critical thought. Above all, we should be sensitive to the significant challenges posed by carrying out linguistically demanding tasks, such as essay writing, debate and discussion, in a second language.

References

  • Atkinson, D. (1997). A critical approach to critical thinking in TESOL. TESOL Quarterly, 31(1), 71–94.

  • Ballard, B., & Clanchy, J. (1991). Teaching students from overseas: A brief guide for lecturers and supervisors. Melbourne: Longman Cheshire.

  • Biggs, J. (1996). Western misconception of the Confucian-heritage learning culture. In D. Watkins & J. Biggs (Eds.), The Chinese learner: Cultural, psychological, and contextual influences (pp. 45–67). Hong Kong: Comparative Education Research Centre & The Australian Council for Educational Research.

  • Campbell, A., Adams, V., & Davis, G. (2007). Cognitive demands and second-language learners: A framework for analyzing mathematics instructional contexts. Mathematical Thinking and Learning, 9(1), 3–30.

  • Cook, V. (1993). Linguistics and second language acquisition. London: Macmillan.

  • Davies, M. (2013). Critical thinking and the disciplines reconsidered. Higher Education Research & Development, 32, 529–544.

  • Ellwood, C. (2000). Dissolving and resolving cultural expectations: Socio-cultural approaches to program development for international students. Paper presented at the National Language and Academic Skills Conference, La Trobe University, Melbourne.

  • Ennis, R. (1987). A taxonomy of critical thinking dispositions and abilities. In J. Baron & R. J. Sternberg (Eds.), Teaching thinking skills: Theory and practice (pp. 9–26). New York, NY: W. H. Freeman and Company.

  • Facione, P. (1990). Critical thinking: A statement of expert consensus – The Delphi Report. California: California Academic Press.

  • Floyd, C. (2011). Critical thinking in a second language. Higher Education Research and Development, 30, 289–302.

  • Gieve, S. (1998). Comments on Dwight Atkinson’s ‘A critical approach to critical thinking in TESOL’. TESOL Quarterly, 32(1), 123–129.

  • Hau, K. T., Halpern, D., Marin-Burkhart, L., Ho, I. T., Ku, K. Y. L., & Chan, N. M. (2006). Chinese and United States students’ critical thinking: Cross-cultural validation of a critical thinking assessment. Paper presented at the American Educational Research Association Annual Meeting, San Francisco. Retrieved from http://commons.ln.edu.hk/sw_master/3348/

  • Hernandez, J. (2016, 30 July). Study finds Chinese students excel at critical thinking. Until college. The New York Times. Retrieved from https://www.nytimes.com/2016/07/31/world/asia/china-college-education-quality.html?mcubz=3

  • Jones, A. (2005). Culture and context: Critical thinking and student learning in introductory macroeconomics. Studies in Higher Education, 30(3), 339–354.

  • Koda, K. (2005). Insights into second language reading. Cambridge: Cambridge University Press.

  • Lun, V., Fischer, R., & Ward, C. (2010). Exploring cultural differences in critical thinking: Is it about my thinking style or is it the language I speak? Learning and Individual Differences, 20, 604–616.

  • Manalo, E., & Uesaka, Y. (2012). Elucidating the mechanism of spontaneous diagram use in explanations: How cognitive processing of text and diagrammatic representations is influenced by individual and task-related factors. Lecture Notes in Artificial Intelligence, 7352, 35–50.

  • Manalo, E., Kusumi, T., Koyasu, M., Michita, Y., & Tanaka, Y. (2013). To what extent do culture-related factors influence university students’ critical thinking use? Thinking Skills and Creativity, 10, 121–132.

  • McBride, R., Xiang, P., Wittenburg, D., & Shen, J. (2002). An analysis of preservice teachers’ dispositions toward critical thinking: A cross-cultural perspective. Asia-Pacific Journal of Teacher Education, 30(2), 131–140.

  • Moore, T. (2011). Critical thinking and language: The challenge of generic skills and disciplinary discourse. London: Bloomsbury.

  • Mulvey, B. (2016). Writing instruction: What is being taught in Japanese high schools, why, and why it matters. The Language Teacher, 40(3), 3–8.

  • OECD. (2014). PISA 2012 results: Creative problem solving. Retrieved from https://www.oecd.org/pisa/keyfindings/PISA-2012-results-volume-V.pdf

  • Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional design: Recent developments. Educational Psychologist, 38, 63–71.

  • Paton, M. (2005). Is critical analysis foreign to Chinese students? In E. Manalo & G. Wong-Toi (Eds.), Communication skills in university education: The international dimension (pp. 1–11). Auckland, New Zealand: Pearson Education New Zealand.

  • Paton, M. (2011). Asian students, critical thinking and English as an academic lingua franca. Analytic Teaching and Philosophical Praxis, 32(1), 27–39.

  • Shaheen, N. (2016). International students’ critical thinking-related problem areas: UK university teachers’ perspectives. Journal of Research in International Education, 15(1), 18–31.

  • Takano, Y., & Noda, A. (1993). A temporary decline of thinking ability during foreign language processing. Journal of Cross-Cultural Psychology, 24(4), 445–462.

  • Tiwari, A., Avery, A., & Lai, P. (2003). Critical thinking disposition of Hong Kong Chinese and Australian nursing students. Journal of Advanced Nursing, 44(3), 298–307.


Funding

No funding was received for the study.

Author information


Corresponding author

Correspondence to David Rear.

Ethics declarations

Author’s information

David Rear is an associate professor at the Faculty of Science and Technology at Chuo University, Tokyo. He teaches courses on critical thinking, intercultural awareness and global studies and conducts research in critical thinking and critical discourse analysis. His most recent publication is a chapter on the teaching of critical thinking in Essential Competencies for English Medium University Teaching edited by Ruth Breeze and Carmen Guinda, published by Springer. He has also recently been published in Critical Policy Studies, Asia Pacific Journal of Education, Contemporary Japan and Asian Business & Management. In addition, he has published several critical reading textbooks for university students in Japan. Some of his publications may be viewed at his academia.edu profile: https://nihon-u.academia.edu/DaveRear

Competing interests

The author declares that he has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Rear, D. The language deficit: a comparison of the critical thinking skills of Asian students in first and second language contexts. Asian-Pacific Journal of Second and Foreign Language Education, 2, 13 (2017). https://doi.org/10.1186/s40862-017-0038-7
