My goal this year is not to use the photocopier and, thus far, I have met that goal. The two hurdles I anticipate are full-period tests and midterms, the latter of which, to be frank, I’m not sure I’ll clear. I did, however, attempt my first full-period test on Friday, with mixed question types and some experiments, and it went better than expected.
The test was for my Latin 3/4/5 course and covered two grammatical topics as well as the first fifteen or so lines of Ovid’s Apollo and Daphne. Traditionally, on a paper test, students would receive a copy of Ovid’s text with words and sections highlighted (italicized and underlined) that specific questions refer to. Students would also receive a vocab sheet. Right there was a problem: juggling a vocab sheet, the text, and the questions seemed unwieldy. The solution? Create a graphics file of the text (via screenshot) and upload it to each question (more below on why a graphics file rather than a text file). For the vocab, I did upload a separate file, but showed students how to toggle quickly between applications using command-tab.
I was also concerned about cheating, especially with the angles the screens present. The class, however, sits in a double-row U, so the solution was to have the back row turn their desks around, putting everyone back to back. It also seemed that, because only one question appears on the screen at a time, cheating would be harder to time (as opposed to a paper test, where the whole test, or at least large portions of it, is visible at once).
In preparation for the test, I had a discussion with a colleague (HP) about the efficacy of multiple choice questions, and she echoed a sentiment I had long held, namely that the integrity of a question is undermined if there is the possibility of guessing it right. The advent of electronic assessment (beginning with Promethean’s ActiVote) forced me to set that conviction aside; I still hold it to some extent but am willing to sacrifice it for the benefits of e-multiple choice (instant feedback). Our discussion, though, got me thinking about whether there was a way around the problem, and ItsLearning’s quiz module does in fact provide a workaround.
ItsLearning has a question type called hotspot, in which the teacher uploads an image and then highlights an area of it as the correct area. Students then have to click within that area to get the question right (imagine a map quiz: you upload the map and highlight, say, Pennsylvania; students have to click within Pennsylvania to get the Pennsylvania question right). I figured I could use this feature for some of the grammar questions (and even improve upon them). Here’s what I did:
- I uploaded the graphics file of the text to a hotspot question.
- I unchecked the box that scales the image to the size of the space (the hotspot question has a fixed space in which the graphic appears; if the image is scaled down, it becomes very small; with the box unchecked, students have to scroll through the image but can at least see it).
- I then wrote the question and highlighted the correct word on the graphics file of the text, e.g. In line 452, what word does Peneia agree with? The word Daphne is highlighted, and students have to click on that word to get it right.
- I saved the question and then copied and pasted it fifteen times, which saved me from having to upload the image each time.
- To change the highlighted area for each copy, I clicked the highlight, pushed the clear button, and then highlighted the new area.
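For the curious, the hotspot mechanic boils down to a point-in-region test: the teacher’s highlight defines an area in the image’s coordinates, and the student’s click either lands inside it or it doesn’t. Here is a minimal sketch in JavaScript — the rectangle model and all the names are my own illustration, not ItsLearning’s actual implementation:

```javascript
// Illustrative point-in-rectangle check for a hotspot-style question.
// A "region" is the teacher's highlighted area, in image pixel coordinates.
function inHotspot(click, region) {
  return click.x >= region.x &&
         click.x <= region.x + region.width &&
         click.y >= region.y &&
         click.y <= region.y + region.height;
}

// e.g. a made-up highlight around "Daphne" in the uploaded image of the text
const daphne = { x: 120, y: 340, width: 90, height: 22 };

console.log(inHotspot({ x: 150, y: 350 }, daphne)); // click on the word → true
console.log(inHotspot({ x: 40, y: 350 }, daphne));  // click elsewhere → false
```

A real quiz engine may well support freeform or polygonal highlights, but the rectangle case captures the idea.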
The other question types were sort (for ordering forms correctly), multiple choice, either/or, and paragraph (for the translations).
Some issues that came up during the test:
- The copying and pasting sometimes resulted in missed changes (e.g. line numbers), so that a question would say 452 when it should have said 455.
- Taking the test in Chrome, with its swipe-back gesture (swiping to one side or the other navigates to the previous or next page), caused some students to lose work when they accidentally swiped off of the page. This was especially frustrating on the translation questions, which required the longest answers.
- The test took longer than I expected. I suppose I assumed that the multiple choice questions would move more quickly than their paper fill-in equivalents, but that might not have been the case. I did take about ten minutes at the beginning of class to go over the format, and did create a dummy test for the hotspot question so the class could try it before the actual test, but it still surprised me that three or four students were still working at the end of class.
- ItsLearning seemed to handle student mistakes well (hitting the wrong button, even closing the window); it appeared to save answers as students went, which was a nice safety net for them.
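The swipe-back problem above is, incidentally, the kind of thing a web page can guard against with the browser’s standard beforeunload event, which lets the page ask for confirmation before the user navigates away with unsaved work. A sketch of the general technique — my own illustration of what a quiz page could do, not something ItsLearning necessarily does:

```javascript
// Sketch: warn before leaving a page that has an unsaved answer.
// (Illustrative only -- not ItsLearning's actual behavior.)
let hasUnsavedAnswer = false;

function onAnswerEdited() { hasUnsavedAnswer = true; }
function onAnswerSaved()  { hasUnsavedAnswer = false; }

function beforeUnloadHandler(event) {
  if (!hasUnsavedAnswer) return;  // nothing to lose; allow navigation silently
  event.preventDefault();         // modern browsers then show a generic confirm dialog
  event.returnValue = "";         // legacy requirement for older Chrome
}

// In a browser this would be registered with:
// window.addEventListener("beforeunload", beforeUnloadHandler);
```

The browser shows only a generic “leave site?” prompt, but even that would have saved a few translations on Friday.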
We’ll see how the grading goes, both of the open-ended questions themselves and of the combination of computer-graded and me-graded questions.
Here’s my question, though. Clearly some students prefer paper tests. To what extent should I honor this preference? There are plenty of preferences I don’t honor; should technology receive special treatment? And what are the factors that go into making this decision?
I’ll keep this updated as I grade and hear more from the students.