Exploring the Relationship Between Critical Thinking and Active Participation in Online Discussions and Essays

Graham J. PASSMORE, Lakehead University
Ellen CARTER, Brock University
Tom O'NEILL, Brock University

Abstract

This paper explores the link between critical thinking and active participation in an online course and essays. Weekly discussion topics were set up in a first-year university Child and Youth Studies course. Students were categorized into active and non-active participants depending on the number of contributions they made to the online discussions. Toward the end of the course, each student wrote an essay about one of the discussion topics. The nature and quality of critical thinking in the discussions and the essays of active and non-active course participants was compared using the Newman, Webb and Cochrane (1995) content analysis technique. While active and non-active course participants exhibited the same general pattern of critical thinking ratios (for individual indicators of critical thinking) in the discussions and their essays, the essays of active participants consistently evidenced greater depth of critical thought. The finding holds implications for large online courses where students are placed into groups at random.

Introduction

In large online classes, students are often placed into discussion groups at random.
Further, they are often required to write essays about the topics that they have discussed. This study was inspired by the researchers' interest in the effect that randomly assigning
students into online discussion groups might have upon their subsequent ability to write essays. In particular, we were interested in exploring the notion that students who are placed in weak discussion groups might be disadvantaged when it comes time to write an essay. The study ran in a first-year, university-level Child and Youth Studies course whose structure was similar to that of other online courses. At the start of each week that the course ran, a discussion topic was introduced into WebCT's asynchronous discussion tool. The discussion topics changed to complement and reflect the lecture component of the course. Toward the end of the course, each student used contributions to the discussions that they and their peers had made to form a framework for an essay about one of the discussion topics. Students were divided into active and non-active participants based upon the number of times they submitted a text-based contribution to the asynchronous discussion. Students who contributed more than the average number of submissions (19.95) were considered active; those who contributed less received a non-active ranking. The nature and quality of critical thinking in the contributions that active and non-active participants made to the discussions and presented in their essays was compared using the Newman, Webb and Cochrane (1995) content analysis technique. Critical thinking (for individual Newman et al. indicators) did not vary across the discussions and the essays; however, the depth of critical thinking exhibited in the essays of active participants was consistently higher than that of the non-active participants. Our interpretation of these findings is that active participation in an online discussion serves as a valuable arena for essay preparation and that the common practice of placing students into groups at random may disadvantage those (potentially active) students who find themselves in inactive discussion groups.
Teachers and administrators in the K-to-university system can take steps to protect students by setting up and moderating online courses to encourage active participation, thereby making students aware that active participation can better prepare them for essay writing. There is evidence in the literature to support the notion that online discussion provides opportunities for students to test their thinking prior to the production of an essay. Lea (2001), for example, submitted the notion that online discussion serves as a venue for the exploration of ideas. Newman, Johnson, Cochrane and Webb (1996) found that, in comparison to face-to-face discussion, students taking part in computer conferences brought in relevant outside material more often and were better at linking together ideas and solutions. Garrison, Anderson and Archer (2000) found that contributions to discussions were concerned with the exploration rather than the resolution of ideas.

Background: The development of the Newman, Webb and Cochrane content analysis method

Newman, Webb and Cochrane (1995) viewed critical thinking as a link between group learning and deep thinking. They thought that it supports group problem-solving processes that precede deep and meaningful learning. With this thinking in mind, they developed a content analysis method to assess expressions of critical thinking in transcripts of discussions in computer mediated communication (CMC) sessions. A description of the Newman et al. method follows. It rates comments that CMC participants submit to a discussion against a series of indicators of critical thinking.
Descriptions of critical thinking, deep and surface learning, and group learning precede the description of the method. These descriptions provide the reader with an understanding of the theories that inspired the development of the Newman et al. content assessment technique. They also provide an understanding of the constructs that the method assesses.

Critical thinking

Garrison, Anderson and Archer (2000) provide a description of critical thinking that is suited to our needs for, like the Newman et al. method, it is influenced by, and builds upon, the Garrison (1991) model of critical thinking. Like others (Brookfield, 1987; Garrison, 1991), Garrison et al. see critical thinking as a process that involves five phases. The first phase, a triggering event, is followed by perception, deliberation, conception and warranted action. During the problem-solving process, the nature of the problem at hand and the learner's ability to move toward problem resolution will influence how much time is devoted to each phase. Deliberation, for example, may last for several minutes, while perception might take up a second or less. Garrison, Anderson and Archer (2000) also hold that critical thinking is a multiphasic, holistic phenomenon that can be considered from a variety of perspectives. Garrison (1991) makes note of an inductive-deductive thinking perspective, an abstractness-concreteness perspective, a problem-solving perspective and a creative thinking perspective. The latter two perspectives were the focus of the Garrison model of critical thinking and they are the perspectives that the indicators of the Newman et al. model account for. Garrison, Anderson and Archer (2000) hold that expressions of critical thinking in the online learning world result from a synergistic information exchange between the personal (reflective and internal) and shared (online information exchange) worlds of the student.
That is, online discussion augments and facilitates the internal processes of deliberation and conception such that warranted action is made possible. Warranted action might be the creation of additional contributions that take the discussion closer to problem resolution, such that comprehension and/or problem resolution is made possible for the contributor and others. Comprehension resulting from online discussion would have a positive influence on expressions of critical thinking in students' essays.

Deep learning and surface learning

Deep learning is, for many, a desired outcome of an educational experience (Garrison, Anderson and Archer, 2001). According to Newman, Webb and Cochrane (1995), it is promoted by active learner participation. Newman et al. also note that Biggs (1987) associated deep learning with the affective involvement or level of commitment of the student and maintained that it is supported by interaction (with the material to be learned, and amongst the students themselves). In an online learning environment, interaction is associated with discussion about the material that is to be learned. Thus, deep learning requires affective involvement and interaction with the material under consideration. In contrast to deep learning, surface learning does not require a critical understanding of the material at hand or interaction with it. According to Newman et al., surface learning is promoted by learning strategies such as skimming, memorizing, and regurgitating for tests. Deep learning is preferred over surface learning for it provides for
a level of comprehension that enables more effective knowledge use. The Newman, Webb and Cochrane (1995) content assessment method comprises a series of paired indicators. Each pair of indicators accounts for deep and surface learning in regard to a particular aspect of critical thinking. For example, the indicator pair R+/R- refers to the relevance of statements submitted to the online environment. Statements that are accorded an R+ rating contain relevant information, while R- or irrelevant statements do not add substantively to the discussion. Similarly, the I+/I- indicator pair reflects the importance or unimportance of information contained in discussion statements. I+ statements contain important information while I- statements do not.

Group Learning

Newman, Webb and Cochrane (1995) argue that there is a clear link between social interaction, critical thinking and deep learning. The link involves reciprocal supportive relationships wherein the components feed upon and are fed by each other. Thus the depth of learning that results from an online discussion is dependent upon the extent of the social interaction and the degree to which critical thinking is expressed. Another factor to be considered in group learning is affective involvement. Newman et al. make note that Biggs (1987) associated deep learning approaches with "affective involvement" (Newman, Webb and Cochrane, 1995, p. 60), which is supported by interaction. Integrating the above information provides a picture of the Newman et al. beliefs about group learning, critical thinking and deep learning. They see group problem-solving processes as being dependent on a loop in which students' affective involvement fuels social interaction, which in turn provides positive feedback that stimulates critical thinking. In turn, critical thinking leads to the critical understanding that is required for deep learning.
The Newman, Webb and Cochrane Content Analysis Method

According to Garrison, Anderson and Archer (2000), the development of the Newman, Webb and Cochrane (1995) content analysis technique was influenced by Garrison (1991). Garrison, Anderson and Archer (2000) acknowledge the contribution that the Newman et al. technique makes to content analysis and, presumably, Newman et al. influenced the development of their content analysis technique in some way (even if only at a subconscious level). That being the case, an explanation is required at this juncture to justify our decision to use the Newman, Webb and Cochrane (1995) technique over the Garrison, Anderson and Archer (2000) method. Newman, Webb and Cochrane (1995) targeted sub-message level components of text for analysis. That is, they reviewed individual messages a number of times to find evidence of multiple indicators of their scale. For example, a sentence or part of a sentence might be rated as providing evidence of the indicator Relevance in a student's thinking, while another sentence in the same message might provide evidence of Importance. Garrison, Anderson and Archer (2000) note that sub-message level analysis is more arduous and time consuming than message-level assessment (multiple reviews of a message are required as opposed to a single review). They also note that it is more specific. It is the specificity of the Newman et al. technique that makes it the method of choice for this study. To explain, when Garrison, Anderson and Archer (2000)
applied their method to text messages they found that 41% coded at the same low level (exploratory) and only a few messages coded at higher levels of critical thinking. Garrison et al. submitted two explanations for this finding: first, that online writing is not suited to high-level thinking; and second, that high-level thinking might only occur in the online world when a particular research design is adopted. While we subscribe to the second interpretation, we could not be sure that our study would elicit a range of levels of critical thinking. Moreover, we could not be sure that levels of critical thinking would differ from online discussion to essay. That being the case, we felt that we needed a technique that can assess different types of critical thinking rather than different levels. The Newman, Webb and Cochrane (1995) content analysis method was chosen for its sensitivity to different types of critical thinking. A complete description of the Newman, Webb and Cochrane (1995) content analysis can now be provided. It was designed to assess the quality of learning (expressions of critical thinking) in online group learning environments. The research and theory of a number of workers formed a basis for its development: Anderson (1993), Biggs (1987), Brookfield (1987), Entwistle and Ramsden (1983), Garrison (1991), and Henri (1991). A complete description of these influential works is beyond the scope of this paper. Suffice it to say that they provided Newman, Webb and Cochrane (1995) with information about critical thinking, critical reasoning, group learning and deep learning that enabled them to identify potential indicators of critical thinking in a group learning environment. Examples of the indicators that Newman et al. identified are presented in Table 1 (the remaining indicators are listed in the results section).
When rating a contribution to an online discussion against the indicators in Table 1, a rater seeks to determine whether it contains relevant or irrelevant statements and whether it presents important or unimportant points and issues. If the contribution is deemed to be both relevant and important, it is awarded an R+ I+ rating. If it contains unimportant and irrelevant information, it receives an R- I- rating. Newman et al. restricted the rating process to text-based submissions to the asynchronous discussion that contained obvious examples of the indicators that they identified. By ignoring content they could not obviously ascribe to an indicator, they reduced the potential for rating contributions inaccurately.

Table 1. The Newman, Webb and Cochrane (1995) Indicators of Critical Thinking

Indicator     Components
Importance    I+  Important points/issues
              I-  Unimportant points/issues
Relevance     R+  Relevant statements
              R-  Irrelevant statements/diversions
Newman et al. used the formula below to identify a depth of critical thinking ratio for each of their indicator pairs:

CTR(X) = (X+ - X-) / (X+ + X-)

The X+ in the numerator and denominator of the formula refers to the number of positive ratings for an indicator pair in an entire discussion. The X- refers to the number of negative ratings. A simple example best explains the application of the formula. In a discussion that contains 8 positive ratings for Importance and 4 ratings for Unimportance, the critical thinking ratio for the Importance (I+/I-) indicator would be:

(8 - 4) / (8 + 4) = 4/12 = 0.33

The above formula was applied to the discussion contributions and the essays of the students in the Child and Youth Studies course of this study. Critical thinking ratios were calculated for each of the indicator pairs that Newman et al. identified. To obtain a single, comprehensive rating for critical thinking for the discussions and the essays, an adaptation of the formula was applied. The adaptation required the summation of critical thinking ratios for each of the indicator pairs in a discussion or an essay. The sum was then divided by the number of different types of indicator pairs in the discussion or essay. For example, if Importance and Relevance were the two indicator pairs identified in a discussion and their critical thinking ratios were 0.33 and 0.25 respectively, the combined or total critical thinking ratio would be:

(0.33 + 0.25) / 2 = 0.29

Limitations of the Method

It is important to note that while the Newman et al. content assessment method contains indicators that pertain to the problem-solving and creative thinking perspectives of critical thinking, it is limited by the extent to which its indicators truly encompass the nature of these perspectives. By extension, it is also limited by the extent to which these perspectives capture the nature of critical thinking. Designing valid indicators is not an easy task.
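The per-pair ratio and the combined ratio described above can be sketched in a few lines of Python. (The function names are ours, not part of the Newman et al. method, and treating an indicator pair with no ratings as a neutral 0 is our own assumption.)

```python
def ctr(positive: int, negative: int) -> float:
    """Depth-of-critical-thinking ratio for one indicator pair:
    (X+ - X-) / (X+ + X-), as in Newman, Webb and Cochrane (1995)."""
    total = positive + negative
    if total == 0:
        return 0.0  # assumption: a pair with no ratings scores neutrally
    return (positive - negative) / total

def total_ctr(pair_ratios: list[float]) -> float:
    """Combined critical thinking ratio: the mean of the per-pair
    ratios present in a discussion or essay."""
    return sum(pair_ratios) / len(pair_ratios)

# Worked example from the text: 8 Important vs. 4 Unimportant ratings
importance = ctr(8, 4)                     # (8 - 4) / (8 + 4) = 0.33
combined = total_ctr([importance, 0.25])   # (0.33 + 0.25) / 2 = 0.29
```

Applied to a whole transcript, `ctr` would be called once per indicator pair and the resulting list passed to `total_ctr`, mirroring the two-step calculation in the text.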
Garrison, Anderson and Archer (2000) hold that indicators need to be sufficiently generalized that raters can be trained to use them quickly and effectively. They also contend that indicators need to be sufficiently specific that they measure something of value. Compounding the above dilemmas is an interrater reliability issue that arises whenever raters use transcripts of text to try to determine the thoughts and intentions of the students who created them. What is relevant and important information to one rater might not be relevant and important to another. However, according to Newman, Webb and Cochrane (1995), obtaining a true measure of interrater reliability may not be possible or necessary:

This content analysis technique works well for the transcripts we have tried it on. But we did find some practical problems. Since some of the indicators such as OC+, OM+, I+, R+ rely on subject knowledge, the scripts need to be marked by someone with such
knowledge: ideally the class tutor. So getting multiple evaluators to control for subjective scoring is not usually possible. Instead, just as in the expert system, one relies on the experience and expertise of the tutor. (p. 72)

Given the requirement for subject knowledge in an interrater reliability assessment, we relied instead upon the expertise of the second author to reliably score the students' contributions to discussions and essays. Newman, Webb and Cochrane (1995) add that "at present, content analysis is long winded. it takes too long for this to be used outside research projects" (p. 73). The second author attests to the long-winded nature of the analytical process. She notes too that the time expectations are such that it would be unreasonable to expect others to assess all of the discussions and essays to get a true measure of interrater reliability. Adding to the above, the question of whether to report all of the individual paired indicators or overall critical thinking scores presented an issue that was insoluble at the time of writing. When considered in conjunction, the above issues led to our decision to forgo an interrater reliability assessment.

Research Design

When this study began there were 484 registered students in the Child and Youth Studies course (which was taught by the third author). Students were provided with lecture and seminar-type instruction. The lectures were scheduled twice weekly and lasted approximately 45 minutes each (90 minutes of lecture per week). PowerPoint slide shows that included photographs, charts and video clips were used to augment the presentation of lecture materials. Some (but not all) of the lecture material was made available to the students online in WebCT. The seminars ran once a week for 45 minutes. Their purpose was to demonstrate to students how they might engage in deep learning in the online discussions. Each week that the course ran, a different topic was added to the discussion area of WebCT.
The topics included bullying, attachment, parenting a child with special needs, mainstreaming, play, friendship, child abuse, eating disorders, gender identity, and resilience. Each discussion reflected the lecture material of that week of the course. The parenting and mainstreaming topics were combined into a single discussion topic, as were play and friendship. This reduced the number of topics and allowed for a 3-week discussion period for each of them. The fall semester was set aside for students to gain familiarity with WebCT and to acquire expertise in using online discussions so that they could engage the course material in a critical, reflective manner. At the start of the winter semester, students were placed in discussion groups at random. There were 6 or 7 students in each group. Preferences were set in WebCT so that the discussion areas for each group were discrete and separate from one another. The WebCT moderator (the second author) began each week's discussion by challenging the students to address an issue related to the lecture topic for that week. Students were encouraged not only to respond to the moderator's queries and challenges, but also to respond to the contributions made by the other students. In this way, a discussion was generated that allowed for deep exploration of the topics ("problems"), engaged critical thinking, and provided opportunities to refine arguments. In the last two weeks of the course the students selected one of the discussion topics as the subject of a final essay. They were told to use their groups' contributions to the discussion to form a framework for their essay.
Results

An examination of students' discussion contributions and their essays was undertaken with the Newman, Webb and Cochrane content analysis method by the second author. The examination considered the question of how the quality of critical thinking in discussions compares to the quality of critical thinking in essays. Table 2 presents the means and standard deviations for participation and critical thinking ratios in the discussions and essays. The mean number of contributions to the online discussions (participation) was 19.95. The mean number of contributions was used to differentiate between active and non-active participants. A contribution rate of 20 or more contributions (more than the mean) was considered active, and fewer than 20 (less than the mean) was considered non-active. The application of the Newman et al. rubric yielded a mean critical thinking ratio (CTR) of .56 for the online discussions and .60 for the essays. Table 2 also presents the means and standard deviations for each measure by group (active participants and non-active participants). The mean number of contributions to the online discussions (participation) was 28.6 for the active participants and 11.2 for the non-active participants. Application of the Newman et al. rubric yielded a mean critical thinking ratio (CTR) for the online discussions of .58 for the active participants and .54 for the non-active participants. The mean critical thinking ratio (CTR) for the essays was .75 for the active participants and .44 for the non-active participants.
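The mean-split classification described above can be sketched as follows. (The student labels and contribution counts here are invented for illustration; only the rule itself, contributions above the mean count as active, comes from the text.)

```python
def split_by_mean(contributions: dict[str, int]) -> dict[str, str]:
    """Label each student active or non-active depending on whether
    their contribution count exceeds the group mean."""
    mean = sum(contributions.values()) / len(contributions)
    return {student: ("active" if n > mean else "non-active")
            for student, n in contributions.items()}

# Hypothetical sample: counts chosen near the group means in Table 2
sample = {"s1": 28, "s2": 11, "s3": 31, "s4": 12}
labels = split_by_mean(sample)  # s1 and s3 fall above the mean of 20.5
```

In the study itself the mean was 19.95 across 64 students, so the cut-off worked out to 20 or more contributions for an active ranking.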
Table 2. Means and Standard Deviations for Predictor and Outcome Variables, by Group (Active and Non-Active Participants)

Variable                            N    Mean    SD
Participation                       64   19.95   9.6
Participation (Active)              32   28.6    3.3
Participation (Non-Active)          32   11.2    4.5
CTR for discussions                 64   .56     .27
CTR for discussions (Active)        32   .58     .15
CTR for discussions (Non-Active)    32   .54     .30
CTR for essays                      64   .60     .23
CTR for essays (Active)             32   .75     .09
CTR for essays (Non-Active)         32   .44     .22

Active = 20 or more contributions overall to the online discussions
Non-Active = fewer than 20 contributions overall to the online discussions

Figure 1 presents the critical thinking ratios for each of the indicators of the Newman et al. content analysis method. The format of the figure was taken from Newman, Webb and Cochrane (1995). Critical thinking ratios for each indicator pair are plotted along axes that run from -1 at the centre of the figure to +1 at the outside. The
R axis represents the relevance indicator pair. The remaining axes represent the following indicator pairs: I = Importance, N = New Information, O = Outside Knowledge, L = Linking Ideas, J = Justification, C = Critical Assessment, A = Ambiguity, W = Widening of Ideas, and P = Practical Utility. To interpret Figure 1, consider that the closer an indicator pair's critical thinking ratio is to -1 or +1, the closer to the centre or the outside of the figure it crosses its axis. For example, the critical thinking ratio for relevance in the discussions was 0.93 for active participants and 0.82 for non-active participants. In consequence, the plot for active participants crosses the R axis closer to the edge of the figure than the plot for non-active participants. Figure 2 presents the critical thinking ratios for each of the indicators (Relevance, Importance, New Information, Outside Knowledge, Linking Ideas, Justification, Critical Assessment, Ambiguity, Widening of Ideas, and Practical Utility) for the essays of the active and non-active participants. Table 3 arrays the correlations among the variables. The correlation between participation and critical thinking ratio for the online discussions was significant (.36), as was the correlation between participation and critical thinking ratio for the essays (.67). Table 4 presents a bivariate linear regression analysis that was conducted with participation as the predictor and critical thinking ratio (CTR) for the essays as the outcome. Results of this analysis indicated that participation in the online discussions accounted for 44.4% of the variability in the critical thinking ratios for the essays.
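In a bivariate regression, the share of variance explained (R squared) is simply the square of the Pearson correlation, which is how the 44.4% figure relates to the reported correlation of .67. A minimal sketch in plain Python (the function name and data are ours; no external statistics library is assumed):

```python
def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson product-moment correlation between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# With the reported r = .67 between participation and essay CTR,
# squaring recovers the explained-variance figure in Table 4:
r = 0.67
r_squared = r * r  # approximately 0.449, i.e. ~44.4% of the variability
```

With a single predictor, running the full regression would yield the same R squared; the beta weight in Table 4 (.67) equals the correlation, as expected for a bivariate model with standardized coefficients.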
Figure 1. Critical thinking ratios for the discussions (active and non-active participants)

Figure 2. Critical thinking ratios for the essays (active and non-active participants)

Table 3. Correlations Between All Variables
                   Participation   CTR Discussions   CTR Essays
Participation           -              .36**            .67**
CTR Discussions        .36**            -               .61**
CTR Essays             .67**           .61**             -

** Correlation is significant at the 0.01 level (2-tailed), p < .01

Table 4. Regression Analysis Predicting Critical Thinking Ratios for Essays from Participation

Predictor        Coefficient   Beta    t      p
Participation        .28        .67   7.04   .000

N = 64. R squared = .44 (F(1, 62) = 49.6, p = .000)

Discussion

Three findings are worthy of note in the above. Contributions made to online discussions by active participants exceeded the contributions of non-active participants in quality as well as quantity. The difference between the quality of critical thinking of active and non-active participants was more apparent in the essays than the discussions. The quality of critical thinking that students expressed in essays could be predicted to a significant degree from active participation in the online discussions. The findings hold implications for K-12 and university-level instructors. In the K-12 arena, instructors should consider setting up and structuring online courses so that active participation is possible, encouraged and modeled for all students. Making students aware of the benefits of active participation in online discussion will benefit them when they enter university, for they will be better prepared for study at that level. The
implication for the university arena is that the practice of placing students into groups at random may disadvantage those students who are placed in inactive discussion groups. Such placements may not provide sufficient opportunities to engage in the level of critical thinking that essay preparation requires. The situation is particularly unfair to hard-working students, for it deprives them of the opportunity to benefit from the contributions of others. It is reasonable to propose that the greater quantities of critical thinking expressed by active participants in this study resulted in a degree of deep learning that exceeded that of non-active participants. This would explain why active participants exhibited significantly higher levels of critical thinking in their essays. It would not explain why there was only a marginal difference between the critical thinking of the two types of participant in the online discussions. An explanation for the latter point lies in the thinking that online discussion represents a writing genre where students explore ideas using critical thinking skills, while essays represent a writing mode in which ideas developed in the discussion can be brought together in a more complete display of critical thinking. It was noted above that Lea (2001) submitted the idea that online discussion is an arena that supports the exploration of ideas. It was also noted that Garrison, Anderson and Archer (2000) found that online discussion promotes the exploration of ideas rather than their resolution. The possibility of generating a negative score is likely to be higher when exploring ideas than it is when writing about thoughts that have been resolved. In providing a greater number of submissions to the discussions, then, active participants are more likely than non-active participants to generate negative scores that
lower their critical thinking ratios. Essays, in contrast, contain text that results from practiced consideration. In a typical 20-page essay, active and non-active participants provide a similar number of statements. In this situation, the understanding that active participants developed from exploring ideas in the discussions results in higher critical thinking ratios. It should be noted that while participation in the online discussion was a significant predictor of critical thinking in the essays, reasons for the variance in critical thinking exhibited by the active and non-active participants undoubtedly extend beyond participation. The commitment level or affective involvement of the participants would be an obvious choice for a second predictor variable, given the Newman, Webb and Cochrane (1995) belief that it fuels social interaction. Failure to assess the participants' affective involvement is a weakness of the design of this study. However, if it is accepted that affect is confounded within participation, then a predictive value of 44% for participation would suggest that, to some extent, the measurement of commitment levels was unnecessary. The students who participated the most were the students who were the most committed to learning and to the course. Moving beyond consideration of the relationship between the depth of critical thinking in discussions and essays, it is interesting to note that, with the exception of importance and ambiguity, active and non-active participants exhibited similar patterns of critical thinking in their essays. Higher levels of importance and ambiguity in the essays of active participants suggest that a certain level of active (and affective) involvement in the discussions is required if students are to identify the important concepts of an essay topic and generate a written product that includes consideration of ambiguous ideas such that imaginations are stretched and deep learning occurs. In other
words, active participation in discussions seems to be an essential prerequisite for the manifestation of deep learning in written essays. The ambiguity indicator is worthy of additional consideration, for its critical thinking ratios for both active and non-active participants were the lowest in this study. Perhaps students felt uncertain about the propriety of introducing ideas to the discussion that they could not fully explain. Perhaps, too, the time restrictions of the semester system prohibited the consideration of ambiguities in the Child and Youth Studies course. It is also possible that the discussions were set up in ways that did not support the consideration of ambiguous ideas. That is, the design of the educational experience (teaching presence) did not support the submission of ambiguous statements. Online discussions might be set up to accommodate the presentation and consideration of ambiguities. To explain, the first two weeks of a discussion might be reserved for students to present and explore ideas that come to them intuitively. During the third week, the moderator might introduce one or more ambiguous notions that have multiple paths to solution and multiple correct answers. While the students consider these ideas, the moderator might encourage them to introduce additional ambiguous concepts. These ideas might be further considered for an additional set number of weeks. Whatever the reason for the low ambiguity scores in the discussions of this study, it seems logical that ambiguity is a variable that can best be attended to in essay writing. Unlike discussions, essay writing extends unlimited time to the expression of ideas. That said, the more that ambiguity can be considered in the discussion, the greater the potential that it can be resolved in an essay. The more that ambiguity is considered and resolved in essays, the greater the potential that deep learning will occur.
References

Anderson, T. (1993). Socially shared cognition in distance learning: An exploration of learning in an audio teleconferencing context. Unpublished doctoral dissertation proposal, University of Calgary.

Biggs, J. B. (1987). Student approaches to studying and learning. Melbourne: Australian Council for Educational Research.

Brookfield, S. D. (1987). Developing critical thinkers. San Francisco: Jossey-Bass.

Entwistle, N., & Ramsden, P. (1983). Understanding student learning. London: Croom Helm.

Garrison, D. R. (1991). Critical thinking and adult education: A conceptual model for developing critical thinking in adult learners. International Journal of Lifelong Education, 10(4), 287-303.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking and computer conferencing: A model and tool to assess cognitive presence. American Journal of Distance Education, 15(1), 7-23.

Henri, F. (1991). Computer conferencing and content analysis. In C. O'Malley (Ed.), Computer supported collaborative learning. Heidelberg: Springer-Verlag.

Lea, M. (2001). Computer conferencing and assessment: New ways of writing in higher education. Studies in Higher Education, 26(2), 163-181.

Newman, D. R., Johnson, C., Cochrane, C., & Webb, B. (1996). An experiment in group learning technology: Evaluating critical thinking in face-to-face and computer-supported seminars. IPCT-J, 4(1), 57-74.

Newman, D. R., Webb, B., & Cochrane, C. (1995). A content analysis method to measure critical thinking in face-to-face and computer supported group learning. IPCT-J, 3(2), 56-77.

Authors: Graham J.
Passmore, Assistant Professor, Faculty of Education, Lakehead University. E-mail: gpassmore@lakeheadu.ca
Ellen Carter, Brock University
Tom O'Neill, Brock University

(Received: 22.7.04; accepted: 6.9.04, 6 weeks; revised: 26.10.04)