Sunday, October 10, 2010

Analyzing Students' Cloze and CARI Tests


Defining Basic Concepts
■ Cloze Test
A passage of 100 words is selected and then every fifth word is deleted. The first and last sentences of the whole text are left in their entirety (Sejnost, 2007).
Please click here to see a sample.
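Sejnost's deletion rule is mechanical enough to sketch in code. The snippet below is only an illustration (not a tool used in this study): it blanks every fifth word of the passage body, on the assumption that the first and last sentences have already been set aside and are passed through untouched.

```python
def make_cloze(body: str, gap: int = 5, blank: str = "_____") -> str:
    """Blank out every `gap`-th word of a passage body.

    Only the body between the first and last sentences is
    processed; those two sentences are left in their entirety.
    """
    words = body.split()
    return " ".join(
        blank if i % gap == 0 else word
        for i, word in enumerate(words, start=1)
    )

print(make_cloze("one two three four five six seven eight nine ten"))
# one two three four _____ six seven eight nine _____
```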

■ CARI Test
The test is divided into three sections: a) knowledge of the reading aids offered by the textbook, b) specific content vocabulary, and c) explicit and implicit information (Sejnost, 2007).
Please click here to see a sample.


Analyzing Results
■ Cloze Test
According to Chatel (2001), cloze tests allow the researcher to identify three reading levels, depending on the percentage of correct responses. For more information about what cloze tests show, please click here.

1) Independent Reading Level (58% to 100%)
2) Instructional Reading Level (44% to 57%)
3) Frustration Reading Level (0% to 43%)
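These cut-off points translate directly into a scoring rule. The function below is a minimal sketch (not the study's actual scoring procedure) that maps a percentage of correct responses to one of Chatel's (2001) three levels.

```python
def reading_level(pct_correct: float) -> str:
    """Map a cloze score (% correct) to one of Chatel's three levels."""
    if pct_correct >= 58:
        return "Independent"
    if pct_correct >= 44:
        return "Instructional"
    return "Frustration"

print(reading_level(63))  # Independent
```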

The graph below shows the percentage of students within each level on the pre-test and post-test. As the graph makes evident, the activities implemented to strengthen students' reading comprehension had a positive effect: the number of students at the frustration level decreased by 50%, while the numbers at the instructional and independent levels increased by 150% and 30%, respectively.
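The percentage changes reported for each level are simple relative changes in head counts between the two tests. As a sanity check, the counts below are hypothetical (chosen only to be consistent with the reported percentages; the real counts appear in the graph), and the calculation reproduces the figures:

```python
# Hypothetical pre/post head counts, consistent with the reported changes;
# the study's actual counts are shown in the graph above.
counts = {
    "Frustration":   (12, 6),
    "Instructional": (2, 5),
    "Independent":   (10, 13),
}

for level, (pre, post) in counts.items():
    change = (post - pre) / pre * 100  # relative change in percent
    print(f"{level}: {change:+.0f}%")
# Frustration: -50%
# Instructional: +150%
# Independent: +30%
```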

In addition to descriptive statistics, inferential statistics help the researcher determine whether the difference between two means is statistically significant. To do so, the researcher must establish a hypothesis, run a t-test, and then decide whether to reject or retain that hypothesis. Inferential statistics add validity to the study, since they establish whether two sets of data clearly differ. I used SPSS to run a t-test on the data from the pre-test and post-test.
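For readers without SPSS, a paired t-test can be reproduced from first principles. The sketch below is illustrative only: the scores are made up (the study's raw scores are not published here), and the critical value is the standard two-tailed t value for df = 9 at α = 0.05.

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic for matched pre/post scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical scores for 10 students (not the study's raw data).
pre  = [40, 45, 50, 42, 48, 55, 38, 47, 52, 44]
post = [58, 60, 66, 59, 63, 70, 55, 64, 68, 60]

T_CRIT = 2.262  # two-tailed critical t, df = 9, alpha = 0.05
t = paired_t(pre, post)
print(f"t = {t:.2f}, reject H0: {abs(t) > T_CRIT}")
# t = 49.60, reject H0: True
```

If |t| exceeds the critical value, the null hypothesis that the two means differ only by sampling error is rejected.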

After running the t-test, the decision was to reject the null hypothesis (that any difference between the two means is due to sampling error), since p < 0.05. In conclusion, the average post-test score (63) significantly exceeded the average pre-test score (47).

■ CARI Test
According to Sejnost (2007), a Content Area Reading Inventory (CARI) helps teachers understand to what extent students succeed at learning content-area topics through reading. Since this type of test measures vocabulary as well as understanding of implicit and explicit information, the results can help the teacher determine which skills need further development.

Considering the above points, the information was analyzed to determine to what extent each of these three skills developed. The graph below summarizes the results. As can be seen, students improved in understanding vocabulary as well as explicit and implicit information. The greatest change is in the vocabulary skill, whose average rose from 52% to 88%.

In addition, inferential statistics were used to determine whether the pre-test and post-test results, taking all the elements together (questions addressing vocabulary and implicit and explicit information), were statistically different.

After running the t-test, the decision was to reject the null hypothesis (that any difference between the two means is due to sampling error), since p < 0.05. In conclusion, the average post-test score (69) significantly exceeded the average pre-test score (47).


References

Chatel, R. (2001). Diagnostic and Instructional Uses of the Cloze Procedure. The NERA Journal, 37(1), 3-6.

Sejnost, R. (2007). Reading and Writing across Content Areas. Thousand Oaks, CA: Corwin Press.

Friday, October 1, 2010

Analyzing Skype Logs

Besides my journal, another instrument that provided me with qualitative information was the set of Skype logs I collected from the moment the implementation process began.

As Saxton (2008) argues, emerging technologies offer a variety of tools, such as blogs and Skype, that can add to our professional development. This author clarifies that “the new Web is participatory, with information flowing in all directions rather than simply from author to reader.” In this regard, these web tools give us the possibility to interact with each other, learn from each other, and construct knowledge through that interaction.

In action research, I see great potential in using Skype both as a data collection tool and as evidence of the ongoing reflection process that characterizes action research. As Burns (1999) claims, action research is “evaluative and reflective as it aims to bring about change and improvement in practice, it is participatory as it provides for collaborative investigation by teams of colleagues, practitioners, and researchers” (p. 30).


Taking into account the above premises, I used Skype during this research for two purposes: to maintain an ongoing dialogue with my research director, thereby fostering the reflection component of the action research cycle, and to gather data about my views on the whole process. These views were then validated, confirmed, or illuminated by the research director's views.

Please click here to have access to the coding of the conversations I held with my thesis director throughout the implementation process.


References
Saxton, E. (2008). Information Tools – Using Blogs, RSS, and Wikis as Professional Resources. Young Adult Library Services 024-294. Chicago: Young Adult Library Services.

Burns, A. (1999). Collaborative action research for English language teachers. Cambridge: Cambridge University Press.

Tuesday, September 14, 2010

Analyzing the Teacher's Journal Information

Analyzing data is a challenging endeavor. As a novice researcher, I have found it quite hard to capture the essence of the data. Everything seems important, since every little piece of information opens a window for reflecting upon your teaching practice. As Auerbach (2003) points out, your research concerns should be kept visible so that you focus your attention on relevant text. Following this author's advice, I used a mapping strategy to organize relevant text and group it according to emerging patterns. Sometimes, however, you get so immersed in the data that you start drifting away from your research purposes.

Burns (1999) discusses dialogic validity as a criterion that supports a study's validity and, in addition, offers the researcher the chance to test his/her hypotheses, find new approaches to the data, and argue his/her points of view. She refers to "peer review" as a powerful tool built on reflective dialogue with a "critical friend". In my case, my "critical friend" is my research director.


After showing him the first version of my map and engaging in a reflective dialogue, I refined it and came up with a more polished map that shows the key concepts around which the analysis will continue.

Please click here to have access to the second version of the map, which groups emerging patterns and ideas taken from the teacher's journal.


References
Auerbach, C. F. (2003). Qualitative Data: An Introduction to Coding and Analysis. New York: NYU Press.


Burns, A. (1999). Collaborative action research for English language teachers. Cambridge: Cambridge University Press.

Wednesday, September 8, 2010

Analyzing Qualitative Data

Dear reader, I invite you to watch this short video, which explains the methodology proposed by Auerbach for categorizing qualitative data and developing a theoretical narrative.


After going through steps 1 to 4, I designed this map to visualize how repeating ideas are connected and to begin naming these related events.

Reference
Auerbach, C. F. (2003). Qualitative Data: An Introduction to Coding and Analysis. New York: NYU Press.