A number
of four-point rubrics were developed to analyse the open-response questions, the artefacts and the
presentations/discussions (0 = naive, 1 = emergent, 2 = informed, 3 = developed). At the time of the pre-test,
understandings of inquiry-based instruction ranged from 0 to 2, and understandings of ICT instruction ranged
from 1 to 3; the mean score was 0.9 for inquiry and 1.3 for ICT. By the time of the post-test, both ranges had
shifted upward, and mean scores rose to 2.1 for inquiry and 2.5 for ICT. The item that contributed most to the
improved post-test score for inquiry was “Do you think inquiry-based instruction is worthwhile in the
classroom?” Unlike the pre-test, most responses included some form of definition of inquiry that went beyond
‘cook-book labs’ to include questioning and problem posing. Likewise, ICT responses indicated that learning had
moved beyond the use of word processors to the use of technologies that could be used to represent
learning.
