Name: Jimmy Winfield and Carla Fourie
Course: ACC2011S & ACC2111S - Financial Reporting 1
Faculty: Commerce
Level: 1st year
Category: Expanding/Enhancing/Adapting
One sentence summary: Three types of assessments (knowledge checks, weekly quizzes, and class tests & exam) were run fully online using Vula’s Tests & Quizzes and Assignments tools. The assessments deviated substantially from previous formats but aimed to assess the same learning outcomes.

Context: Financial Reporting 1 is a first-year, second-semester course within the Chartered Accountancy programme. ACC2011S runs in the mainstream programme, while ACC2111S runs within the Educational Development Unit. During emergency remote teaching (ERT), the two courses were collapsed into one and taught through a single Vula site. Combined, the class size came to approximately 950.

Purpose: During ERT, all assessments were administered through Vula’s assessment tools. There were three types of assessments: knowledge checks (5 or 10 points each), weekly quizzes (10 marks each), and two class tests (75 marks each) plus an exam (120 marks). Knowledge checks were short quizzes posed after each learning activity. Using the prerequisite function on Vula’s Lessons page, they were set up so that students needed to earn a minimum number of points before proceeding to the next learning activity. These short quizzes were not graded (the points served only to unlock content) and served a formative function.
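As a minimal sketch of this gating logic (not Vula’s actual implementation; the activity names and thresholds below are invented for illustration):

```python
# Hypothetical sketch of prerequisite gating: each knowledge check must
# reach a minimum score before the next learning activity is revealed.

def unlocked(checks: list[tuple[str, int, int]]) -> list[str]:
    """checks: (activity, points earned on its knowledge check,
    minimum points needed to unlock the next activity)."""
    visible = []
    for activity, earned, minimum in checks:
        visible.append(activity)
        if earned < minimum:
            break  # later activities stay hidden until this check is passed
    return visible

print(unlocked([("Activity 1", 5, 4), ("Activity 2", 2, 8), ("Activity 3", 0, 4)]))
# -> ['Activity 1', 'Activity 2']  (Activity 3 stays locked)
```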

The weekly quizzes assessed several learning activities that involved applying concepts and skills. The quizzes were designed to support students’ learning and to serve as regular checks on whether students were on track. They served a formative and diagnostic function, although marks were assigned for motivational purposes.

The two class tests and the exam served a summative function, and the tests also served a diagnostic function. All three of these assessments used both the Tests & Quizzes and Assignments tools on Vula. In Tests & Quizzes, the autosave function was deliberately enabled so that answers typed into the text boxes were saved as students worked. Questions administered through the Assignments tool required students to download the question, answer it on paper, photograph their written responses and upload the image.

Unlike the knowledge checks, which were new, the class tests, exam and weekly quizzes were new forms of assessments that had existed in previous face-to-face iterations of the course. The questions were redesigned to take into account the constraints of online assessment tools. For instance, to reduce the risk of student collusion, multiple versions of each question were created within the question pool, and Vula’s calculated question type was also used extensively. The assessments were also redesigned as open-book assessments.
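The core idea of a calculated question is that each student sees the same question template with different randomised values, and the correct answer is computed from a formula. A minimal sketch of that idea (the depreciation template, value ranges and seeding scheme are illustrative assumptions, not Vula’s implementation):

```python
import random

TEMPLATE = ("A machine costs R{cost} with a residual value of R{residual} "
            "and a useful life of {years} years. Calculate the annual "
            "straight-line depreciation.")

def make_variant(rng: random.Random) -> tuple[str, float]:
    """Produce one randomised variant of the question and its answer."""
    cost = rng.randrange(100_000, 500_000, 10_000)
    residual = rng.randrange(0, 50_000, 5_000)
    years = rng.choice([4, 5, 8, 10])
    question = TEMPLATE.format(cost=cost, residual=residual, years=years)
    answer = (cost - residual) / years  # straight-line depreciation formula
    return question, answer

# Seeding by student number makes each student's variant reproducible.
question, answer = make_variant(random.Random("STU12345"))
print(question)
print(f"Expected answer: R{answer:,.2f}")
```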

Process: A general rule guiding the design of the assessments was that wherever rigour traded off against the risk of disadvantaging students with poor connectivity, the design favoured protecting those students. For this reason, rather than limiting time, the exam was open for 24 hours and the weekly quizzes were open for a week. Students could take the weekly quizzes more than once, and automated feedback was provided after the close date.

Preparation time was an important factor in the process. While time was saved on marking, a significant amount of time was required for test setting and administration. Setting took longer because we had to reconcile what we wanted to ask with what the tools allow.

The administration of the exams also demanded a significant amount of time, as we made ourselves available to assist with technical issues during the 24-hour period in which the exam was open. During that window, we sent reminders to students highlighting 1) how much time had passed and 2) how many students had completed the exam. A WhatsApp number was made available to which students could send a ‘please call me’; this was used by students with data and connectivity issues. After the exam closed, more time was spent deciding whether to accept late submissions.

Outcomes/Lessons learned: The introduction of knowledge checks was a positive outcome of this experience, and one we will take forward. With the weekly quizzes, although moving online meant they no longer took up tutorial time and marking was automated, the risk of collusion outweighed these benefits; moreover, having tutors mark the quizzes trains them for marking exams. With exams, we are fighting a losing battle: one of the competencies this course develops is the ability to recall formats, to know which format to use and to understand what a format communicates. This skill cannot be assessed in online assessments because of the open-book design and the risk of collusion.

Overall, ERT provided a high-quality teaching and learning experience for us. We have learnt about the teaching and learning aspects of the course in ways we will be able to carry forward even when we return to in-person teaching. We now have a better understanding of how to use Vula to design Lessons pages and how to use the assessment tools. We learnt as much as we could about the Tests & Quizzes tool and its question types, to the extent that Jimmy has developed an approach to setting calculated questions worth 35 marks. These questions are similar to the type of questions we would ask in a paper-based exam. One of the weaknesses of calculated questions is that every answer in a question earns the same number of marks. Jimmy developed a technique that allows manual adjustments to the auto-marking so that effort is rewarded proportionately. We have become much more experimental and creative in how we use the assessment tools, pushing the envelope to make better assessments.
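The adjustment technique itself is not spelled out here, so the following is purely an illustrative sketch of the underlying arithmetic: rescaling equal per-blank auto-marks to instructor-chosen weights. The weights, mark values and function name are assumptions.

```python
def reweight(auto_scores: list[float], weights: list[float],
             per_blank: float, total: float) -> float:
    """Convert equal per-blank auto-marks into a weighted question total.

    auto_scores: marks the tool awarded per answer blank
    weights:     intended weight of each blank, summing to 1
    per_blank:   equal mark automatically assigned to every blank
    total:       marks the whole question is worth (e.g. 35)
    """
    assert len(auto_scores) == len(weights)
    # Fraction of each blank earned, weighted by its intended importance.
    earned = sum((s / per_blank) * w for s, w in zip(auto_scores, weights))
    return round(earned * total, 1)

# Example: 5 blanks auto-marked at 7 marks each (35 / 5), but the final
# blank should really carry half of the credit.
scores = [7, 7, 0, 7, 7]                      # student missed blank 3
weights = [0.125, 0.125, 0.125, 0.125, 0.5]
print(reweight(scores, weights, per_blank=7, total=35))  # -> 30.6
```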

Recommendations: Online assessments are relatively straightforward for formative purposes: the knowledge checks were effective for testing understanding after each learning activity. Summative assessments were more challenging, however, given the risk of collusion and the need for more complex question types. Creating online assessments is a learning curve: the more you do it, the better you get at it.