
Sunday, November 14, 2010

Conclusions and Further Research

Image retrieved from http://www.shutterstock.com/

General Conclusion
Overall, using intensive reading to decode information allows learners to develop conceptual knowledge. This process is mediated by language and by the strategies learners and teachers choose to approach a text and to scaffold the learning process, respectively. The CLIL framework provides the content teacher with key guiding principles that favor a) student-centered learning environments, b) the use of language strategies such as intensive reading, eliciting key vocabulary, and scanning and skimming, and c) the development of higher- and lower-order thinking skills. Finally, students improve their language ability as a result of using the language in meaningful contexts.

Specific Conclusions
1) Reading needs to be approached as an interactive process embedded in meaningful activities designed to challenge thinking.

2) Measuring reading implies a comprehensive view not only of reading but also of measurement. Different types of tests should be combined to guarantee that judgments about students' reading comprehension are valid and reliable.

3) Training students in new strategies to organize information, understand a paragraph, or summarize key information is required if they are to be successful.

4) Students are better able to use the language when both language goals and content goals are taken into account in lesson planning. The strategies used to scaffold the students' language process were eliciting key vocabulary, reading with a purpose in mind, taking notes, and using graphic organizers to account for science concepts.

Further Research
  • What are the effects of implementing language strategies to favor the development of speaking in a content-based classroom?
  • To what extent does the use of graphic organizers to decode math information favor the development of students' mathematical thinking?

Sunday, November 7, 2010

Putting Everything Together



As a result of the data analysis, common trends were organized into categories, forming a skeleton that gives life to a story about fostering reading comprehension in the science classroom. The data were questioned, compared against one another, confronted with the theoretical constructs of the research, and examined through a magnifying glass to identify the key elements that affect students' understanding when reading is implemented.

As a result of this process, the following flow chart emerged to explain the effects of intensive reading to decode information in a science content-based class.


There is enough evidence to say that reading was like a spring that pushed students to ask, to inquire, to get to know more about what they were reading. Reading was a powerful input, which according to Gagné's (1985) Information Processing Theory needs to be decoded to make sense of it. Hence, developing students' conceptualization corresponds to an eclectic process: reading should be combined with hands-on activities, videos, pre-reading activities, and post-reading activities, and the science classroom needs to reflect an interactive learning experience. Reading is a dynamic process and a socially situated practice, and it should be measured from a comprehensive view. This process is more meaningful to students when they are able to share what they read. Regarding how to measure reading, as many authors have pointed out, reading is an invisible act that is quite challenging to measure. However, using two types of tests to analyze to what extent students are developing reading comprehension skills may be useful. Teachers' judgments can become biased if just one type of test is used over and over.



Without a doubt, intensive reading favors cognition and content development. The quantitative analysis shows that this process is mediated by language, as there is a positive correlation between the cloze and CARI tests, meaning that students with low linguistic ability on one test are likely to show low overall reading comprehension on the other. However, this dependence between language and content development weakens by the end of the implementation, revealing that variables other than language can explain students' reading comprehension results. The qualitative analysis shows that this process is mediated by teachers' and learners' strategies.


When cognition and content are developed, students move from factual knowledge to conceptual knowledge. Three main points should be kept in mind: a) the role of vocabulary, b) grounding concepts, and c) the use of questions.


In conclusion, the strategy implemented was successful as students were able to develop their reading comprehension ability and at the same time increase their knowledge about states of matter, kinetic and potential energy, and types of waves. It is evident that through CLIL practices students are able to improve their language, thinking skills, and understanding of science concepts.

Reference
Gagné, E. D. (1985). La Psicología Cognitiva del Aprendizaje Escolar. Traducción de Paloma Linares. Madrid: Visor Distribuciones, S.A.

Sunday, October 24, 2010

Analyzing Students' Interviews

As stated in the data collection plan, a semi-structured interview (Burns, 1999) was chosen in order to get to know students' perceptions about the project, determine whether or not students were able to recall one of the concepts learned during the implementation, identify the reading strategies students would be more likely to pursue when reading a text, and listen to students' recommendations and suggestions for further projects.


  • 1) What was the reading project about?
  • 2) What did you like about it? What didn't you like about it?
  • 3) Could you explain a concept you have learned?
  • 4) What reading strategies did you develop?
  • 5) If you had to read a text, how would you read it, what would you do, why?
  • 6) What recommendations do you have for further projects?



Taking into account that the results of the cloze tests allowed me to place students in three different reading levels (independent, instructional, and frustration), the information gathered from the students' interviews was organized according to those levels, with the purpose of clearly establishing, from a qualitative point of view, the differences among the three proposed reading levels.
Click here to access the interview chart.
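The placement rule itself can be sketched in a few lines of Python. The frustration cutoff (below 43% of correct responses) comes from this study; the independent cutoff of 57% is a common convention for cloze scoring and is used here only as an illustrative assumption:

```python
def cloze_level(percent_correct, frustration_cutoff=43, independent_cutoff=57):
    """Place a student in a reading level based on a cloze test score.

    The frustration cutoff (below 43% correct) comes from this study;
    the independent cutoff (above 57%) is a common convention, assumed
    here for illustration only.
    """
    if percent_correct < frustration_cutoff:
        return "frustration"
    if percent_correct <= independent_cutoff:
        return "instructional"
    return "independent"

# Hypothetical scores, for illustration only
print(cloze_level(30), cloze_level(50), cloze_level(70))
# -> frustration instructional independent
```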







In terms of Content
Independent Reading Level: students are able to accurately describe a concept
Instructional Reading Level: students try to explain a concept, but lack accuracy
Frustration Reading Level: students are unable to explain a concept or just do not remember it

In terms of Reading Strategies
Independent Reading Level: students have an ample range of reading strategies
Instructional Reading Level: students have a limited number of strategies and seem to repeat ineffective ones
Frustration Reading Level: students do not recognize a strategy or repeat ineffective ones

In terms of Thinking Skills
Independent Reading Level: students are able to identify what the project was about and relate both language and content
Instructional Reading Level: students identify one element of the project, whether language or content
Frustration Reading Level: students associate the project with topics studied in class


The aspects mentioned above highlight how progress in terms of content development and cognition is achieved as students move from one level to another. This information is valuable to the teacher, as s/he can plan activities addressed to strengthening the weaknesses of the students placed at each of these levels.

Reference
Burns, A. (1999). Collaborative action research for English language teachers. Cambridge: Cambridge University Press.

Sunday, October 17, 2010

Correlation between CLOZE and CARI Tests



Defining what Correlation means
According to the dictionary, correlation refers to "the degree to which two or more attributes or measurements on the same group of elements show a tendency to vary together". When a correlation between two variables is established, the researcher can assert that there is a relationship between them: if one variable changes, then the other will also change.

The correlation coefficient corresponds to a statistical value, ranging from negative one (-1) to positive one (1), which describes such a relationship. The closer this coefficient gets to 1 in absolute value, the stronger the relationship between the two variables. It is important to warn the reader that correlation does not mean causation. In other words, even if the correlation coefficient shows a strong relationship between two variables, the researcher cannot argue that one variable causes the other or vice versa. In addition to finding the correlation coefficient, the researcher needs to verify through a test the significance of that correlation. For the purposes of this study, the program SPSS was used to calculate both the correlation coefficient and its significance.
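As a sketch of what SPSS computes under the hood, the Pearson coefficient can be obtained directly from its definition (the covariance of the two score lists divided by the product of their spreads). The paired scores below are hypothetical and serve only to show the mechanics:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical paired percentage scores (cloze, CARI), for illustration only
cloze_scores = [35, 48, 52, 60, 71, 80]
cari_scores = [40, 45, 58, 55, 75, 82]
print(round(pearson_r(cloze_scores, cari_scores), 3))
```

A coefficient near 1 for such lists would indicate that students who score high on one test tend to score high on the other, which is exactly the pattern examined below.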

Testing the Relationship between the Students' scores for CARI Test and the Cloze Test
■ Pre-tests


According to the results from SPSS, there is a strong, significant relationship between these two types of tests: the Pearson coefficient equals 0.733, and the two-tailed significance value (reported as 0.000) is less than the alpha significance level (0.05).

■ Post-tests


According to the results from SPSS, there is a positive, significant relationship between these two types of tests: the Pearson coefficient equals 0.539, and the two-tailed significance value (0.008) is less than the alpha significance level (0.05).


Interpreting the Results
  • There is a positive relationship between the CLOZE test results and the CARI test results, which means that if a person does well on one test, he/she will on average do well on the other test.
  • Learners whose reading level according to the CLOZE test corresponds to frustration (below 43% of correct responses) during the pre-stage also achieve low results in the CARI test.
  • The correlation between these two types of tests weakens from the pre-stage to the post-stage, indicating that there is much more variability among the results at the end of the project. In other words, learners whose reading level according to the CLOZE test corresponds to frustration (below 43% of correct responses) during the post-stage might still achieve competent results on the CARI test. Considering that the CLOZE test measures, in my opinion, a linguistic ability, this result might indicate that under the CLIL framework learners with low linguistic ability might achieve competent reading comprehension levels, if that comprehension is measured through open questions and the reader has a set of reading strategies.
  • If the Pearson coefficient (r) is squared and written as a percentage (pre-stage = 54%, post-stage = 29%), it can be concluded that 54% of the variability observed in the CARI test can be explained by the results on the CLOZE test during the pre-stage, whereas only 29% of that variability can be explained by the CLOZE results during the post-stage. This information might indicate that during the post-stage there were other variables besides linguistic ability contributing to students' reading comprehension results in a content area, such as motivation, awareness of different reading strategies, and learners' training.
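The shared-variance percentages come from squaring the reported coefficients, a two-line calculation:

```python
# Coefficient of determination (r squared) from the reported Pearson values
r_pre, r_post = 0.733, 0.539

shared_variance_pre = round(r_pre ** 2 * 100)    # % of CARI variance explained
shared_variance_post = round(r_post ** 2 * 100)  # by the CLOZE scores
print(shared_variance_pre, shared_variance_post)  # -> 54 29
```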
