Balci-Article Critique 3


Article: Domínguez, A., Saenz-De-Navarrete, J., De-Marcos, L., Fernández-Sanz, L., Pagés, C., & Martínez-Herráiz, J. J. (2013). Gamifying learning experiences: Practical implications and outcomes. Computers & Education, 63, 380–392. doi: 10.1016/j.compedu.2012.12.020

 

Summary of the Article

     Domínguez et al. (2013) investigated the impact of specific gamification mechanisms (trophies, achievement medals or badges, and a leaderboard) on student motivation. They opened their introduction by discussing the influence of video games on educational research, such as research on how the features that make video games appealing to players could be applied in education to improve student motivation and engagement. They then moved to gamification, a newer trend in educational research. Gamification can be defined as the use of game mechanics in non-gaming educational contexts, and little empirical work existed on it at the time. The authors therefore aimed to provide empirical evidence about the effects of different gamification mechanisms on student motivation in an online learning environment. According to earlier research, video games are appealing because they engage players' cognitive, emotional, and social areas, so the authors argued that their gamification system should likewise be designed with learners' cognitive, emotional, and social areas in mind. For this study, they created a gamification system intended to increase students' motivation to complete optional exercises in a university-level basic computer skills course. For the cognitive area, the system presented the topics and skills to be mastered in the course as a hierarchical tree with four levels of increasing difficulty, and it awarded copper, silver, gold, and platinum trophies according to difficulty level. For the emotional area, it awarded achievements for task completion, on the assumption that earned achievements would create positive emotions in learners; some hidden achievements were also included to create a feeling of surprise. For the social area, the system used a leaderboard that ranked students by the number of achievements they had earned and showed the ranks, numbers, and percentages of achievements collected by all students.
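To make these mechanics concrete, the following minimal Python sketch is my own illustration, not the authors' implementation: the class names, thresholds, and the way the achievement percentage is computed are all assumptions based on the description above (four trophy tiers tied to difficulty, achievements for tasks, and a leaderboard ranked by achievement count).

```python
from dataclasses import dataclass, field

# Hypothetical trophy tiers mirroring the four difficulty levels described in the article.
TROPHY_TIERS = ["copper", "silver", "gold", "platinum"]
TOTAL_ACHIEVEMENTS = 7  # the gamified course offered 7 participation achievements

@dataclass
class Student:
    name: str
    trophies: list = field(default_factory=list)    # trophies earned from completed exercises
    achievements: set = field(default_factory=set)  # badges awarded for participation/tasks

def award_trophy(student: Student, difficulty_level: int) -> None:
    """Award a trophy whose tier corresponds to the exercise's difficulty level (1-4)."""
    student.trophies.append(TROPHY_TIERS[difficulty_level - 1])

def leaderboard(students: list) -> list:
    """Rank students by number of achievements earned; the percentage shown is relative
    to all available achievements (one plausible reading of the article's description)."""
    ranked = sorted(students, key=lambda s: len(s.achievements), reverse=True)
    return [(rank + 1, s.name, len(s.achievements),
             round(100 * len(s.achievements) / TOTAL_ACHIEVEMENTS, 1))
            for rank, s in enumerate(ranked)]
```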

     They implemented the gamification plug-in in Blackboard for "Qualification for Users of ICT," a course taken by students from different majors. Two classes were chosen and randomly assigned as the experimental group and the control group. The groups were separated both virtually and physically, since the two classes were taught on different campuses. The gamified version of the course offered 36 trophies for completing exercises and 7 achievements for participation. All course content and exercises were the same for both groups. Students in the experimental group could use the gamified version of the course, the traditional version (in which exercises were provided as PDF files), or both. Of the 123 students in the experimental group, 58 registered for the gamified version, while the control group had 73 students. Because the exercises were optional, they were not evaluated in the traditional version of the course or in the control group's class. The course had five modules: introduction to the computer (module 1), word processor (module 2), spreadsheets (module 3), presentation software (module 4), and databases (module 5). Scores of the two groups were compared on the following evaluation items: the initial activity, the midterm assignment (modules 2 and 3), the final assignment (modules 4 and 5), the final examination (a written test composed of multiple-choice and open-ended questions), participation, and the overall final course score.

     They analyzed the results with independent two-sample t-tests. No significant difference was found on the assessed initial activity, which indicated the homogeneity of the groups in terms of prior knowledge. At the end of the course they found significant differences on six measures: the experimental group scored significantly higher on the initial activity (p = .004), spreadsheets (p = .007), presentation software (p = .000), and databases (p = .000), but significantly lower on the final examination (p = .006) and on participation (p = .000) than the control group. There was no significant difference between the groups on word processing (p = .090) or on the overall final course score (p = .090). They further analyzed the data by dividing the experimental group into two subgroups, an experimental non-gamified group and an experimental gamified group, and conducted one-way analyses of variance (ANOVA) to examine differences among the two experimental subgroups and the control group. They reported confidence intervals for the overall final course score, the final examination score, and the participation score for the three groups. Students who earned at least 6 achievements in the gamified system obtained significantly higher final scores, and the non-gamified experimental group scored significantly lower than the other two groups on the final examination and on participation.
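For readers unfamiliar with this kind of analysis, a brief sketch of an independent two-sample t-test in Python with SciPy is shown below. The score vectors are made up for illustration and are not the study's data, and the Welch variant (which does not assume equal variances) is my choice; the article does not specify which variant was used.

```python
from scipy import stats

# Hypothetical module scores for illustration only; these are not the study's data.
control_scores = [55, 60, 62, 58, 70, 65, 59, 61]
experimental_scores = [68, 72, 75, 70, 66, 74, 71, 69]

# Welch's independent two-sample t-test (does not assume equal variances).
t_stat, p_value = stats.ttest_ind(experimental_scores, control_scores, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```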

     They collected several surveys from different groups of students. The first was an attitude survey composed of 10 Likert-scale questions, given to the experimental group students who followed the gamified version of the course; forty-five students answered it. The average mean across these questions was 3.64 on the five-point scale, indicating positive attitudes toward the gamification system. They also collected another survey from 57 experimental group students who reported not using the gamified system. When asked why they did not use the gamified version of the exercises, these students most frequently cited lack of time and technical problems. Some students also reported disliking the gamification system, for example complaining about the competition created by the leaderboard tool.

The authors concluded from the results that a gamification system implemented in an e-learning environment has the potential to increase student motivation, but that designing and implementing such a system requires a great deal of effort from designers. They also stated that their gamification system succeeded in creating emotional and social impacts on students, since some students reported being motivated by the trophies, achievements, and leaderboard. In contrast, the cognitive impact of the system on students was not very significant, given the similar performance of students in both groups on some of the measures.

 

Evaluation of the Article

     The introduction section of the article was well written and comprehensive enough to explain the purpose of the study. The information in this section was also important for understanding the authors' underlying rationale in designing their gamification system. However, the rest of the article was poorly written and not well organized. The headings did not follow the usual article format; for example, there were no separate method and discussion sections. Information about the participants, the procedures, and the design of the gamification system was scattered throughout the article, so readers had to piece it together themselves, which made the whole picture difficult to comprehend.

     I appreciated the authors' efforts to implement the gamification tools while overcoming the multiple problems they faced with Blackboard. Their design of the trophies and achievement medals (badges) was good. However, they used students' real names in the leaderboard. Most leaderboard studies use pseudonyms for participants because of privacy concerns, and the authors should have provided students with more privacy on this point. Only 58 of the 123 students in the experimental group signed up for the gamified version of the course, and the leaderboard could be one reason for this low enrollment rate: this was a web-enabled face-to-face course in which students also saw each other in class, and some students might not have wanted their classmates to see their ranks in the leaderboard. In addition, some students reported negative feelings toward the leaderboard in the surveys. Thus, the leaderboard tool should have been designed better.

     The article has some important problems. First, the authors did not provide any information about the participants, such as gender or age. Second, it was not clear what they meant by "initial activity." They wrote that "The initial activity (week 1) is designed to introduce the course to the students, to get them used to the course stuff and class dynamics…. to fill two surveys about their knowledge and their usage of ICT, and also to complete a short interactive test to assess their initial knowledge about modules 2–6 (word processor, spreadsheets, presentation software and databases)" (p. 385). However, they reported two different results for the initial activity in the results section, in Table 1 and Table 2. In Table 1 they found a non-significant result for the initial activity for both groups, but in Table 2 they reported that the experimental group scored significantly higher on the "initial activity" than the control group (p = .004). I read the section several times to resolve this apparent conflict but could not find an answer. They even reported different numbers of subjects in the two tables: control group n = 62 and experimental group n = 111 in Table 1, versus control group n = 73 and experimental group n = 123 in Table 2. Third, they reported confidence intervals instead of post hoc test values for the differences found in the one-way ANOVA among the three groups (the experimental non-gamified group, the experimental gamified group, and the control group). With three groups and a significant overall difference, they needed to conduct post hoc tests to report which groups differed significantly. As far as I know, the F value and the post hoc test results should be reported as the main findings, and confidence intervals should be reported as additional supporting information. Fourth, their reporting of the survey results was complicated, and at some points it was difficult to follow which survey they were discussing. They should have organized the presentation of the survey results better; there was enough information from the surveys to publish as a separate piece of work. Fifth, as they stated, "The main objective of this plugin is to increase student motivation towards completing optional exercises through the use of rewards and competition mechanisms" (p. 382). They designed the gamification plug-in to motivate students to complete optional exercises, and they tracked the interactions of the 58 students who signed up for the gamified version of the course. However, they never collected information on whether control group students completed the optional exercises. Instead, they compared the two groups' scores on the initial activity, the midterm assignment, the final assignment, the final examination, participation, and the overall final course score. Hence, they actually investigated whether the gamification tools affected the "performance" of the experimental group students rather than their "motivation."
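To illustrate the kind of reporting I am arguing for in the third point, here is a minimal Python sketch of a one-way ANOVA followed by a Tukey HSD post hoc test. The group scores are fabricated for illustration and are not the study's data; the point is that this analysis yields the F value and the pairwise group comparisons that should accompany the confidence intervals.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical final-examination scores for the three groups (illustration only).
control = np.array([70, 72, 68, 75, 71, 69, 73, 74])
exp_non_gamified = np.array([60, 62, 58, 65, 61, 59, 63, 64])
exp_gamified = np.array([66, 69, 71, 68, 70, 67, 72, 65])

# Omnibus one-way ANOVA: report the F value and its p value as the main finding.
f_value, p_value = stats.f_oneway(control, exp_non_gamified, exp_gamified)
print(f"F = {f_value:.2f}, p = {p_value:.4f}")

# Tukey HSD post hoc test: identifies which specific pairs of groups differ significantly.
scores = np.concatenate([control, exp_non_gamified, exp_gamified])
groups = (["control"] * len(control)
          + ["exp_non_gamified"] * len(exp_non_gamified)
          + ["exp_gamified"] * len(exp_gamified))
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```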

     Although this study had serious problems, its practical implication is important. Because the experimental group performed better on the practical assignments (the initial activity, the midterm assignment, and the final assignment) but worse on the written test (the final examination) and on participation, the authors concluded that a gamification system may be more useful for developing practical competences and less useful for developing understanding of the underlying theoretical concepts. Hence, course content should be suitable for gamification, or at least instructors should apply gamification tools to the appropriate parts of the course materials.

Comments (1)

Chip Ingram said

at 1:37 pm on Oct 26, 2016

This was a good critique. You described the study well and had some important issues with it.
