
Sunday, October 24, 2010

Analyzing Students' Interviews

As stated in the data collection plan, a semi-structured interview (Burns, 1999) was chosen in order to get to know students' perceptions of the project, find out whether students were able to recall one of the concepts learned during the implementation, identify the reading strategies students would be more likely to pursue when reading a text, and listen to students' recommendations and suggestions for further projects.


  • 1) What was the reading project about?
  • 2) What did you like about it? What didn't you like about it?
  • 3) Could you explain a concept you have learned?
  • 4) What reading strategies did you develop?
  • 5) If you had to read a text, how would you read it, what would you do, why?
  • 6) What recommendations do you have for further projects?



Taking into account that the results of the cloze tests allowed me to place students in three different reading levels (independent, instructional, and frustration), the information gathered from the students' interviews was organized according to those levels, with the purpose of clearly establishing, from a qualitative point of view, the differences among the three proposed reading levels.
Click here to have access to the interview chart.







In terms of content
Independent Reading Level: students are able to accurately describe a concept
Instructional Reading Level: students try to explain a concept, but lack accuracy
Frustration Reading Level: students are unable to explain a concept or just do not remember it

In terms of reading strategies
Independent Reading Level: students have an ample range of reading strategies

Instructional Reading Level: students have a limited number of strategies and seem to repeat ineffective ones
Frustration Reading Level: students do not recognize a strategy or repeat ineffective ones

In terms of Thinking Skills
Independent Reading Level: students are able to identify what the project was about and relate both language and content

Instructional Reading Level: students identify one element of the project, whether language or content
Frustration Reading Level: students associate the project to topics studied in class


The aforementioned aspects highlight how progress in terms of content development and cognition is achieved as students move from one level to another. This information is valuable to the teacher, as s/he can plan activities addressed to strengthening the weaknesses of the students placed at each of these levels.

Reference
Burns, A. (1999). Collaborative action research for English language teachers. Cambridge: Cambridge University Press.

Sunday, October 17, 2010

Correlation between CLOZE and CARI Tests



Defining what Correlation means
According to the dictionary, correlation refers to "the degree to which two or more attributes or measurements on the same group of elements show a tendency to vary together". When a correlation between two variables is established, the researcher can assert that there is a relationship between them: if one variable changes, then the other will tend to change as well.

The correlation coefficient is a statistical value, ranging from negative one (-1) to positive one (1), which describes such a relationship. The closer the absolute value of this coefficient gets to 1, the stronger the relationship between the two variables. It is important to warn the reader that correlation does not mean causation. In other words, even if the correlation coefficient shows a strong relationship between two variables, the researcher cannot argue that one variable causes the other or vice versa. In addition to finding out the correlation coefficient, the researcher needs to verify, through a test, the significance of that correlation. For the purposes of this study, the program SPSS was used to calculate both the correlation coefficient and its significance.
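For readers without SPSS, the Pearson coefficient itself can be computed directly. The sketch below is a minimal pure-Python implementation (standard library only; the function name is my own choice); testing the significance of the coefficient would still require a statistical package such as SPSS.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient for paired scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance term (numerator) and the two spread terms (denominator)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)
```

A coefficient of 1 indicates a perfect positive relationship, -1 a perfect negative one, and 0 no linear relationship at all.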

Testing the Relationship between the Students' Scores on the CARI Test and the Cloze Test
■ Pre-tests


According to the results of the SPSS program, there is a strong, significant relationship between these two tests: the Pearson coefficient equals 0.733, and the two-tailed p-value is approximately 0, which is less than the alpha significance level (0.05).

■ Post-tests


According to the results of the SPSS program, there is a positive, significant relationship between these two tests: the Pearson coefficient equals 0.539, and the two-tailed p-value is 0.008, which is less than the alpha significance level (0.05).


Interpreting the Results
  • There is a positive relationship between the CLOZE test results and the CARI test results, which means that a person who does well on one test will, on average, also do well on the other.
  • Learners whose reading level according to the CLOZE test corresponds to frustration (below 43% of correct responses) during the pre-stage also achieve low results in the CARI test.
  • The correlation between these two types of tests weakens from the pre-stage to the post-stage, indicating that there is much more variability among the results at the end of the project. In other words, learners whose reading level according to the CLOZE test corresponds to frustration (below 43% of correct responses) during the post-stage might still achieve competent results on the CARI test. Considering that the CLOZE test measures, in my opinion, a linguistic ability, this result might indicate that under the CLIL framework learners with a low linguistic ability can reach competent reading comprehension levels, provided that comprehension is measured through open questions and the reader has a set of reading strategies.
  • If the Pearson coefficient (r) is squared and written as a percentage (pre-stage = 54%, post-stage = 29%), it can be concluded that 54% of the variability observed in the CARI test during the pre-stage can be explained by the results on the CLOZE test, whereas only 29% of the variability observed in the CARI test during the post-stage can be explained by the CLOZE results. This might indicate that during the post-stage there were other variables besides linguistic ability contributing to students' reading comprehension results in a content area, such as motivation, awareness of different reading strategies, and learners' training.
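The squaring step above is easy to verify by hand: r squared (the coefficient of determination), expressed as a percentage, gives the share of CARI variability explained by the CLOZE results.

```python
# Pearson coefficients reported above for the pre- and post-stage
r_pre, r_post = 0.733, 0.539

# r squared, expressed as a percentage of explained variability
print(round(r_pre ** 2 * 100))   # pre-stage: 54
print(round(r_post ** 2 * 100))  # post-stage: 29
```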


Sunday, October 10, 2010

Analyzing Students' CLOZE and CARI Tests


Defining Basic Concepts
■ Cloze Test
A passage of 100 words is selected and then every fifth word is deleted. The first and last sentences of the whole text are left in their entirety (Sejnost, 2007).
Please click here to see a sample.
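The deletion rule is mechanical enough to sketch in code. The function below (name and blank marker are my own choices) applies only the every-fifth-word rule; it does not handle Sejnost's provision that the first and last sentences remain intact.

```python
def make_cloze(passage, nth=5):
    """Replace every nth word of a passage with a blank."""
    words = passage.split()
    return " ".join(
        "_____" if i % nth == 0 else word
        for i, word in enumerate(words, start=1)
    )
```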

■ CARI Test
A test divided into three sections: a) knowledge about the reading aids offered by the textbook, b) specific content vocabulary, and c) explicit and implicit information (Sejnost, 2007).
Please click here to see a sample.


Analyzing Results
■ Cloze Test
According to Chatel (2001), cloze tests allow the researcher to identify three reading levels depending on the percentage of correct responses. If you want more information about what cloze tests show, please click here.

1) Independent Reading Level (58% to 100%)
2) Instructional Reading Level (44% to 57%)
3) Frustration Reading Level (0% to 43%)
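These bands translate directly into a scoring rule. The following sketch (function name my own) assigns a level to a percentage of correct responses, using Chatel's (2001) cut-offs above.

```python
def reading_level(percent_correct):
    """Map a cloze score (0-100 % correct) to Chatel's (2001) reading level."""
    if percent_correct >= 58:
        return "independent"
    if percent_correct >= 44:
        return "instructional"
    return "frustration"
```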

The graph below shows the percentage of students within each level after the pre-test and post-test. It is evident that, as a result of implementing different activities to strengthen students' reading comprehension, the results are positive. The number of students at the frustration level decreased by 50%, while the numbers of students at the instructional and independent levels increased by 150% and 30%, respectively.

In addition to using descriptive statistics, inferential statistics help the researcher know whether the difference between two means is statistically significant. To do so, the researcher must establish a hypothesis, run a t-test, and then decide whether to reject or confirm that hypothesis. Inferential statistics add validity to the study, since clear differences between two sets of data are established. I used the program SPSS to run a t-test on the data coming from the pre-test and post-test.
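Since the same students took both tests, the statistic involved is presumably the paired-samples t. A minimal standard-library sketch of that statistic is given below (function name my own); obtaining the p-value still requires the t distribution's cumulative distribution function, which is what a package such as SPSS provides.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """t statistic for a paired-samples t-test on pre/post scores."""
    diffs = [after - before for before, after in zip(pre, post)]
    n = len(diffs)
    # Mean difference divided by its standard error
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))
```

The resulting t is then compared against the t distribution with n - 1 degrees of freedom to decide whether to reject the null hypothesis.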

The decision after running the t-test was to reject the null hypothesis (any difference between the two means is due to sampling error), since p < 0.05. In conclusion, the average post-test score (63) significantly exceeded the average pre-test score (47).

■ CARI Test
According to Sejnost (2007), a Content Area Reading Inventory (CARI) test helps teachers understand to what extent students are successful at learning content area topics when reading. Taking into account that this type of test measures vocabulary and the understanding of implicit and explicit information, the results obtained can help the teacher assess which skills need to be further developed.

Considering the above points, the information was analyzed to determine to what extent each of these three skills was developed. The graph below summarizes the results. As can be seen, students improved in understanding vocabulary as well as explicit and implicit information. The greatest change is observed in the vocabulary skill, which rose from an average of 52% to 88%.

In addition, inferential statistics were used to analyze whether the results of the pre-test and post-test, taking all the elements together (questions addressing vocabulary and implicit & explicit information), are statistically different.

The decision after running the t-test was to reject the null hypothesis (any difference between the two means is due to sampling error), since p < 0.05. In conclusion, the average post-test score (69) significantly exceeded the average pre-test score (47).


References

Chatel, R. (2001). Diagnostic and Instructional Uses of the Cloze Procedure. The NERA Journal, 37(1), 3-6.

Sejnost, R. (2007). Reading and Writing across Content Areas. Thousand Oaks, CA: Corwin Press.

Friday, October 1, 2010

Analyzing Skype Logs


Besides my journal, another instrument that provided me with qualitative information was the set of Skype logs I collected from the moment the implementation process began.

As Saxton (2008) argues, the new emerging technologies offer a variety of tools, such as blogs and Skype, that can add to our professional development. This author clarifies that “the new Web is participatory, with information flowing in all directions rather than simply from author to reader.” In this regard, these web tools give us the possibility to interact with each other, learn from each other, and construct knowledge through that interaction.

In action research, I see great potential in using Skype as a data collection tool and as evidence of the ongoing reflection process that characterizes action research. As Burns (1999) claims, action research is “evaluative and reflective as it aims to bring about change and improvement in practice, it is participatory as it provides for collaborative investigation by teams of colleagues, practitioners, and researchers” (p. 30).


Taking into account the above premises, I used Skype during this research for two purposes: to maintain an ongoing dialogue with my research director, fostering in this way the reflection component of the action research cycle, and to gather data about my views on the whole process. These views were validated, confirmed, or enlightened by the research director’s views.

Please click here to have access to the coding of the conversations I held with my thesis director throughout the implementation process.


References:
Saxton, E. (2008). Information Tools – Using Blogs, RSS, and Wikis as Professional Resources. Young Adult Library Services 024-294. Chicago: Young Adult Library Services.

Burns, A. (1999). Collaborative action research for English language teachers. Cambridge: Cambridge University Press.