The Influence of Cognitive Styles and Gender on Visual Behavior During Program Debugging: A Virtual Reality Eye Tracker Study
Jason C. Hung and Chun-Chia Wang

Human-centric Computing and Information Sciences volume 11, Article number: 22 (2021)
https://doi.org/10.22967/HCIS.2021.11.022

Abstract

This study utilized a VR eye-tracker to analyze visual attention and to compare differences in internal behavioral cognition during program debugging assignments. Forty students (27 men and 13 women) who had studied C++ programming language courses for at least one year in a department of computer science were recruited as participants and categorized into field independence (FI) and field dependence (FD) groups based on their Group Embedded Figures Test (GEFT) scores. Based on the defined regions of interest (ROIs), this study used eye movement indicators to discuss the differences in visual attention across ROIs during the program debugging task. The results indicated that participants with different cognitive styles followed different gaze sequences when viewing the ROIs; no significant difference was identified between the FI and FD groups in students' visual fixation on the ROIs of the programs based on the latency of first fixation (LFF); a partly significant difference was identified between the FI and FD groups based on the duration of first fixation (DFF); FI students showed longer fixation durations and higher fixation counts than their FD counterparts when viewing the ROIs of the programs; participants of different genders followed different gaze sequences when viewing the ROIs; a partly significant difference was identified in students' visual fixation on the ROIs of the programs by gender based on the LFF and DFF; male students showed longer fixation durations than female students when viewing the ROIs of the programs based on the total fixation duration (TFD); and the comparison of fixation counts (FC) by gender when viewing the ROIs of the programs yielded inconsistent results.


Keywords

Virtual Reality, Eye Movement, Program Debugging, Visual Attention, Cognitive Processes


Introduction

According to [1], virtual reality (VR) has rapidly opened up a new paradigm wherein a user experiences and interacts with a computer-mediated virtual environment. VR applications have been tried in a myriad of fields, and education is no exception. When it comes to teaching and learning, VR technology is able to create an environment that presents instant, dynamic, and interactive scenes. In other words, VR can simulate scenes that cannot be operated or observed in real life; thus bridging the differences in user perception and enabling users to interact with virtual contexts and acquire more specific learning experiences to enhance learning interest and effectiveness by adopting appropriate software and hardware interfaces. For example, Liaw et al. [2] used VR to build an active virtual ergonomics learning system that allows learners to observe 3D virtual human organs to understand their functions, ways of operation, structures, and locations.
In recent years, “individual difference” has been a topic of great importance in the field of educational psychology. Influenced by cognitive psychology, the study of cognitive styles has been valued by researchers, who regard style as a characteristic of personal behavior and, at the same time, an individual’s style of coping that others can perceive. Gagne et al. [3] believed that one of the abilities related to students’ problem-solving skills is “cognitive strategies.” Cheng [4] also stated that, in problematic situations, individuals often engage in non-logical reasoning to solve problems based on their cultural background, similar past experience, and unique mode of thinking. Because each individual’s behavior style is different, it can be inferred that different learners’ adaptability or problem-solving ability is often considerably diverse; thus prompting many researchers to explore the reasons for such differences. Among various factors, cognitive style seems to be one of the reasons affecting individual learning differences. It is an individual’s unique personality, or, in other words, a habit of organizing, interpreting, and learning materials. It can be used to identify learners’ differences in cognitive activities. Different cognitive methods will cause individuals to have different reactions and solutions when they are facing problems. Previous studies believed that individuals’ unique cognitive styles affect users in the process of information seeking and reflect actual behavioral results [5, 6].
In addition, [7] showed in a cross-cultural survey that men have higher self-efficacy with computers compared with women. Even with the same computer experience, men still have higher levels of computer self-efficacy than women [8]. Furthermore, Ong and Lai [9] surveyed 156 employees from 6 engineering companies in Taiwan and found that men understand the use of digital learning systems more easily than women. Therefore, this study also sought to explore the gender differences involved in program debugging.
Many studies have shown different results, however. For example, Francis [10] found that gender differences in computer anxiety and self-confidence are not significant. Galpin et al. [11] reported that gender differences in computer self-efficacy exhibit mixed results: there are obvious differences between men and women among college students, but no significant gender differences between boys and girls among children. Lin et al. [12] also added that learners of different genders have different mental arithmetic abilities, working memory space, and problem-solving abilities in program understanding and debugging. However, the authors of [12] only touched on the topic of gender differences involved in program debugging, and the interaction of multiple factors was not included. Therefore, this study attempted to bridge this gap in the field by further investigating the topic of program debugging.
To investigate program debugging comprehensively, eye-tracking technology has in recent years been used to explore, based on recorded data, the psychological processes at work when humans interact with and process external information. It has mostly been used in research areas such as cognition, psychology, reading activities, and human-computer interaction [13-15]. For example, Carrillo and Falgueras [13] conducted a quick and simple eye-tracking experiment to identify the screen regions that users looked at most frequently under the direct manipulation (DM) and goals-guided interaction (GGI) styles. In this study, we combined eye-tracking technology with VR head-mounted display (HMD) equipment to produce a self-made VR eye-tracker that presents 3D scenarios of programming languages, for the sake of analyzing visual attention in the VR system. On the one hand, such a design let participants actually experience the virtual environment through the VR device and use the handheld controllers to interact with objects in the virtual environment, achieving an immersive experience. On the other hand, eye-tracking technology enabled real-time discussion of the characteristics of cognitive thinking and the testing of participants' cognitive processes, while recording and analyzing the eye-tracking process in order to explore differences by cognitive style and gender under the VR sensory experience. A comparative analysis of the eye movements and visual attention generated during program debugging is presented; the present study is expected to pioneer the integration of research, development, and application of VR eye-trackers, laying a solid foundation for and bringing new insights to the development of new VR technology. Based on the purposes mentioned above, the study hoped to answer the following questions:

Are there any differences in viewing sequences for all regions of interest (ROIs) among participants with different cognitive styles when the program is debugged?

Are there any differences in viewing sequences for all ROIs among participants by gender when the program is debugged?

Are there any differences in visual attention among participants with different cognitive styles when the program is debugged? If so, what is the difference?

Are there any differences in visual attention among participants by gender when the program is debugged? If so, what is the difference?


Related Work

As VR eye-tracking technology is not yet mature and practical, this study first summarizes related work on the application of VR in education, theories related to programming learning, cognitive style and programming learning, and gender and programming learning, as well as eye-tracking technology and program debugging.

Application of Virtual Reality in Education
A VR system allows users to be immersed in the learning environment. With the system's simulative and interactive features, VR can increase the interactivity between learners and materials and provide learners with an immersive experience. The system also solves the issue of time constraints by enabling learners to practice repeatedly anytime. If VR can be added to traditional teaching activities, it can most likely provide learners with a more realistic, flexible, and better learning environment. Generally speaking, VR applications in education can be classified into two categories according to their visualization and interaction devices [16]: the non-immersive system, often called desktop VR, which relies on a standard display screen as a window into the virtual world rather than on additional devices such as an HMD [17]; and the immersive system, which requires the user to wear a data glove and an HMD that tracks the user's head movements and changes the view accordingly [18].
For example, Temkin et al. [19] used VR to build an interactive, immersive online virtual human anatomy teaching system to improve the teaching of human anatomy. Hoffman et al. [20] applied VR technology to build a teaching system called Anatomic VisualizeR to aid in the teaching of clinical anatomy. Brenton et al. [21] believed that VR simulates the spatial relationship of human body structures, so a web-based virtual system of human body structure was designed to aid in the teaching of human anatomy. Bang et al. [22] proposed a new interaction technology based on infrared sensors and user postures not only to enable users to apply their intentions and actions to virtual life directly but also to increase user interest and sense of immersion. On the other hand, popular examples of desktop VR systems are video games, which present the non-immersive system on a screen containing only a 3D display without dedicated interaction devices [23].
Beyond education, VR has also been applied to training, whether functional or skill training, in school and in the workplace. For example, in a fire disaster prevention course, Wang and Lin [24] used VR devices as a learning tool to develop a digital game as learning material for disaster prevention education and adopted the experiential learning cycle as a theory for improving learning effectiveness and motivation. Moreover, in some studies, researchers took full advantage of VR by exposing users to a 3D virtual environment wherein they could walk around and behave as in the real world, in order to conduct experiments related to food decisions [25]. In recent years, more and more empirical studies have been conducted on the effects of VR games in higher education, especially on learning programming in VR. For example, Pierre et al. [26] suggested that learning in VR games not only positively motivates students' learning attitude in higher education but also positively improves college students' learning of computer programming. Gallego-Duran et al. [27] created PLMan, a Pac-Man-like game in which students control the character using Prolog programs, to address problems in the Computational Logics course. They reported that PLMan instantly gives students visual and statistical feedback on their performance. Additionally, the effects of using PLMan in Computational Logics on programming education were overwhelmingly positive.

Studies on Cognitive Styles and Programming Learning
Cognitive styles refer to the habits and traits exhibited by individuals while performing problem solving, thinking, cognition, understanding, and information processing, among other types of cognitive behavior. An individual's cognitive style is constant and stable, cannot be drastically influenced by factors of time or environment, and can be observed through individual learning [28]. Kelly [29] believed that individuals are just like computers: they have different characteristics of receiving, storing, and processing external environments or events. When faced with a problem, individuals actively interpret and construct a set of theories or hypotheses for prediction and control. When individuals are stimulated by the outside world, they transform their internal processes into explicit behavior through sight, hearing, and thinking. This transformation process is related to and consistent with the human cognitive structure [30]. Messick [31] believed that cognitive style is an individual's preferred way of organizing and processing information and experience. It is a neutral personality trait that is stable and difficult to change, affecting all behaviors of an individual in his or her daily life. In general and education research, “field dependence/independence” is used as the classification standard for cognitive style. According to [32], this dimension reflects “the extent to which a person perceives parts of a field analytically” (p. 275): field-independent (FI) individuals perceive parts of a field “as discrete from the surrounding field as a whole” (p. 275), whereas field-dependent (FD) individuals' perception is dominated by the overall organization of the field. Through this criterion, how learners reconstruct information in the learning environment based on the information they find can be observed [33].
Liu [34] conducted research on the relationship between cognitive style, emotional intelligence, and problem-solving ability, finding that individuals' problem-solving abilities vary with cognitive style and that field-independent students do not necessarily have superior problem-solving skills compared with field-dependent students. Hung [35] found, in research on personality traits and the effectiveness of programming learning, that learners who tend to think positively and uniquely have better learning outcomes in object-oriented programming. Such learners also perform better in interpersonal relationships, with a high level of achievement. On the other hand, prudent behavior and programming performance are significantly related. Therefore, factors such as logical thinking and personality traits affect the learning effectiveness of programming. In a recent study, the authors of [36] proposed an eye-controlled interactive reading system (ECIRS) in place of the traditional mouse-controlled interactive reading system (MCIRS) to support screen-based digital reading. According to the analytical results of the experiment, the ECIRS improves the reading comprehension of field-independent learners more than that of field-dependent learners [37]. More recently, Kholid et al. [38] showed differences in critical thinking skills across cognitive styles in solving mathematics problems: FI subjects fulfilled all of the critical thinking aspects, such as interpretation, analysis, evaluation, inference, explanation, and self-regulation, whereas FD subjects fulfilled only some of the indicators in these aspects. Yusnaini et al. [39] found that field-independent students performed better than field-dependent students in answering unfamiliar types of accounting questions; nonetheless, there was no difference in performance between the two cognitive styles when faced with familiar types of questions.

Studies on Gender Difference and Programming Learning
In terms of learners’ cognitive processes and programming skills, gender difference is one of the most discussed topics. The findings of [40] showed that men perform better than women in computer-related tasks. Other research studies [10, 41] also suggest that, for computer learning, men also exhibit higher confidence, attitude, experience, and enthusiasm compared with women. Still, some studies suggest the opposite view, pointing out that there is no significant difference in programming performance between men and women (e.g., [42]). For instance, Lai [43] conducted a study of teaching different programming languages to elementary school pupils and concluded that gender difference did not influence the children’s learning achievement.
Recently, eye-trackers have frequently been used as an aid to process human physiological signals. For instance, Lin et al. [12] used eye-tracking technology to explore gender-related differences in finer programming cognition. The study concluded that there is no significant difference between the two genders in terms of the gaze fixation ratio in each program region. It is worth mentioning that this research further subjected the fixation data to sequence analysis, outlined the relevance of switching among different ROIs, and then speculated on the participants' cognitive processes during program understanding and debugging. This sequence analysis revealed that men are more accustomed to writing down numbers, whereas women needed a further calculation process to complete recursive debugging, suggesting that women may have slightly less capability in mental arithmetic, working memory space, and problem solving. The results obtained by [12] show that findings on the programming performance of the two genders are not consistent and remain rather inconclusive; hence the need for further investigation toward a more comprehensive view. In addition to the previous study, the authors of [44] used eye-tracking as an objective measure to examine children's learning processes of coding during block-based programming activities. The study found no statistically significant differences in gaze and learning gain between the two genders during coding activities. Quite interestingly, however, the qualitative data showed significant differences in implementation practices during coding. Moreover, the study sheds light on objective evidence that female students do not lack competence compared to their male counterparts; instead, they simply approach the execution of coding activities differently.

Studies on Eye-Tracking Technology and Program Debugging
Crosby and Stelovsky [45, 46] were the earliest researchers to use eye-tracking technology in program reading and understanding in order to explore the relationship between cognitive styles, individual differences, and code reading patterns. The research also tried to understand the different paths of eye movement by novice and expert participants when they were trying to understand the program algorithm. The results showed that, when reading the binary search algorithms, experts focused more on the meaningful regions in the source code and the complex narratives, whereas novices focused more on annotations and comparisons. Both novices and experts paid little attention to keywords that are reserved words which identify a syntactic form, and there was no methodological difference in terms of strategies for reading. Because the study only provided the codes to participants, however, no static or dynamic visualization of the code was introduced. Other research—by comparing experts and novices—found that experts spent less time scanning the program before finding bugs than novices, and that experts focused on fixations over ROI to a greater degree whereas novices viewed program code lines more widely [47].
Program debugging is a fairly complex cognitive process and is defined as a systematic, thoughtful process of identifying program errors and using problem-solving strategies to locate and fix their cause [48, 49]. The subject needs to understand not only the rationale of the program but also its structure [50], not to mention that it is a time-consuming, mentally demanding task [51]. Thus, for beginner programmers, debugging is a burdensome and challenging task [52]. Some researchers [53-55] chose to integrate screens and program editing software with eye-trackers to identify the mode of switching among different regions (code display region, program category correlation visualization region, and output result display region), as well as differences in fixation counts and fixation durations across regions, when experts and novices were using editing software to debug and understand code. The results showed that, in program debugging and program understanding, novices recorded significantly higher fixation counts and longer fixation durations than experts in the program category correlation visualization region. Moreover, experts switched among the three ROIs significantly more frequently than novices, showing that experts are more active in understanding and integrating information across the three ROIs.
These studies, however, used rather small samples, which makes their statistical verification somewhat insufficient. To address the problem of insufficient sample size, [12] performed sequence analysis using the fixation information obtained to illustrate the relevance of switching among the ROIs, and then speculated on the participants' cognitive processes during program understanding and debugging. The results showed that low-performance students may have less working memory, leading to frequent computing and recording behavior and consequently to lower mastery of program knowledge. On the other hand, the understanding/debugging method of high-performance students is more logical, and they have more extensive programming knowledge; thus, they have better command of the methods. Moreover, men are more accustomed to recording numbers, whereas women need to perform more calculation while carrying out recursive debugging and possibly have slightly insufficient mental arithmetic ability, working memory, and problem-solving ability.
Other studies have also applied eye-trackers for data collection in order to generate rich information about students' programming learning. For example, Obaidellah et al. [56] reported a survey study on the use of eye-tracking in computer programming, pointing out that program comprehension and debugging are the two research interests that have gained the most popularity in recent years [56]. On the other hand, Sun and Hsu [57] implemented an eye-tracking scaffolding system that tracked the eye movements (i.e., fixation positions and durations) of learners to evaluate their attention level, providing just-in-time hints as they worked on programming tasks. In another study, the authors of [58] explored the influence of eye movement modeling examples on program comprehension and program reading in a classroom: the researchers recorded the eye movements of an expert programmer while programming, and the video was then used as a modeling example to help novice engineers improve their program comprehension and program reading competence. According to the empirical studies, the results all indicate significant improvements in the novices' program comprehension competence.


Research Methodology

In this study, a self-made VR eye-tracker was used as the instrument. The VR eye-tracker collected and recorded the eye movements (quantitative data) of the participants with a view to exploring the differences in visual attention under the immersive experience of VR as they perform the program debugging assignment.

Participants
In this study, a total of 40 university students (27 men and 13 women) with an average age of 20.6±3.2 years (range, 19–23 years) who had taken C++ programming language course(s) for at least 1 year at a department of information science in Southern Taiwan were recruited as participants. The participants' cognitive styles were determined using the cognitive style scale, the Group Embedded Figures Test (GEFT) revised by Witkin et al. [59], as the data collection tool. The average score distinguishes the cognitive style of a participant: participants who scored high are the FI type, whereas those who scored low are the FD type. After the participants were divided into two groups, the experiment on VR eye-tracking for program debugging was performed. In this GEFT test, the average score was 8.2 points and the standard deviation was 6.4 points. As recommended by [59], cognitive style was distinguished by plus or minus one half of one standard deviation from the mean. Therefore, in this study, a GEFT score of 11 points or higher means that the participant is FI (n=16), and a score of 5 points or less means that the participant is FD (n=16). Those who scored 6 to 10 points are intermediate students (n=8) and were excluded from the FI/FD comparison.
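As a minimal sketch of the grouping rule just described, the snippet below (in C++, for consistency with the study's stimulus language) applies the reported integer cut points, which approximate the mean plus or minus one half standard deviation (8.2 ± 3.2, i.e., 11.4 and 5.0):

```cpp
#include <cstdio>

// Grouping rule reported above: GEFT >= 11 -> FI, GEFT <= 5 -> FD,
// 6-10 -> intermediate (excluded from the FI/FD comparison).
// The integer bands approximate mean +/- 0.5 SD (8.2 +/- 3.2).
const char* classify(int geftScore) {
    if (geftScore >= 11) return "FI";        // field independent
    if (geftScore <= 5)  return "FD";        // field dependent
    return "intermediate";
}

int main() {
    for (int s : {3, 5, 8, 11, 17})
        std::printf("GEFT %2d -> %s\n", s, classify(s));
    return 0;
}
```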

Stimuli
The Unity 3D development tool was used to create three VR programs as stimuli for the participants in this study: an iterative structure, a recursive call structure, and a function call structure. In each question, three or four syntax or semantic errors can be spotted. The task for the participant is to identify those errors without help from program editing software or any other form of aid. Participants then used a handheld controller to answer the question. To prevent scenarios wherein a participant simply guesses the answer from the names of the variables, variables were all named with simple symbols, and the ROIs were divided according to the functions of the program [12], e.g., variable declaration, loop condition judgment formula, functional program operation, function call, etc. The definition of the ROIs was based on the recorded video output, and this definition process was conducted only after the formal experiment. The red boxes represent the ROIs defined after the experiment and their names. Fig. 1(a) shows the VR classroom scene of the first program in the question. Fig. 1(b) presents the ROIs defined in the first question. The header file was defined as ROI1. The variable declaration in the main program and input/output functions were defined as ROI2. The conditional statement of the loop was defined as ROI3, and the if/else conditional expression was defined as ROI4. The first program has three bugs designed based on common misconceptions and difficulties.
The VR classroom scene of the second program is shown in Fig. 2(a), whereas Fig. 2(b) presents the ROIs defined in the second question. The header file was defined as ROI5. The variable declaration in the main program and input/output functions were defined as ROI6. The multiple subroutine call was defined as ROI7. The output of the main function was defined as ROI8. The variable declaration in the multiple subroutine was defined as ROI9. The if/else conditional expression in the multiple subroutine was defined as ROI10. The recursive call was defined as ROI11, and return in the recursive function was defined as ROI12. The second program has three bugs designed based on common misconceptions and difficulties.
The VR classroom scene of the third program is shown in Fig. 3(a), whereas Fig. 3(b) presents the ROIs defined in the third question. The header file was defined as ROI13. The variable declaration in the main program and input/output functions were defined as ROI14. The change subroutine call was defined as ROI15. The output of the main function was defined as ROI16, and the operating area of the change subroutine was defined as ROI17. The third program has four bugs designed based on common misconceptions and difficulties.
Fig. 1. VR class scenario and defined ROIs of Q1.
Fig. 2. VR class scenario and defined ROIs of Q2.
Fig. 3. VR class scenario and defined ROIs of Q3.
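The three stimuli themselves are shown only in Figs. 1–3. Purely as an illustration of the kind of question described above, the fragment below sketches a hypothetical loop-structure program seeded with three semantic bugs of the sort the study mentions (an off-by-one loop bound, a malformed compound assignment, and an assignment used as a condition); it is not one of the study's actual stimuli, and the symbol-style variable names simply follow the naming convention described above.

```cpp
#include <iostream>

// Hypothetical stimulus sketch (not the study's actual program).
// Variables use simple symbols so their names give no hints, as described.
int main() {
    int a, s = 0;
    std::cin >> a;
    for (int i = 1; i < a; i++) {   // Bug 1: should be i <= a to sum 1..a
        s =+ i;                     // Bug 2: '=+' assigns +i instead of adding
    }
    if (s = 0)                      // Bug 3: assignment used instead of '=='
        std::cout << "empty";
    else
        std::cout << s;
    return 0;
}
```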

Design
The experiment design of this study was based on two different human factors: cognitive styles and gender. C++ program debugging was applied in the experiment, with VR eye-tracking technology adopted. The experiment aimed to explore the difference in cognitive process when participants debugged the program. In order to achieve such an aim, the study divided 40 participants into two groups (FD and FI) according to the cognitive style scale, the GEFT. The participants were asked to browse the original code of the C++ programs. During the browsing process, the eye movement data were recorded by a VR eye-tracker. The collected data were then analyzed and contrasted to determine whether different participants (with different cognitive styles and of different genders) showed significant differences in visual behavior. The variables in this experiment design and their relationships are shown in Fig. 4.
Fig. 4. The relationship between variables.

Instruments
Based on the objectives of this study, two instruments were used to proceed with the experiment: GEFT and VR eye-tracker.

Group Embedded Figures Test
The GEFT used in this study was proposed by Witkin et al. [59] to measure learners' cognitive styles. This scale divides participants into FD and FI types. The GEFT is a speed and accuracy test, with a total of 8 simple figures and 18 complex figures. A participant needs to find the simple figure specified in the title hidden within the complex figures, and must complete the tasks within the allocated time. The GEFT scores are calculated based on the number of correct answers, each of which translates into one point, with 18 as the highest possible score. Using the Spearman-Brown prophecy formula, the split-half reliability estimate of the scale has been determined to be 0.82 [59]. The higher the score, the more the participant's cognitive style tends toward FI; a lower score means the participant is more FD.
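As a point of reference, the Spearman-Brown prophecy formula mentioned above predicts the reliability $\rho$ of the full test from the correlation $r_{hh}$ between its two halves:

$$\rho = \frac{2\,r_{hh}}{1 + r_{hh}},$$

so the reported split-half reliability of 0.82 corresponds to a half-test correlation of roughly $r_{hh} = 0.82/(2-0.82) \approx 0.69$.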

Virtual Reality Eye-Tracking
A self-made “VR eye-tracker” was used in this research. Attached to the tracker is an extra mini microscope to record images and video clips. The microscope was installed specifically to record the condition of the pupils under infrared LED illumination. The capture rate is 30 frames per second. The microscope lens was integrated into the VR device and adjusted until it could capture pupil movement properly. An image of the self-made VR eye-tracker is shown in Fig. 5. This eye-tracker is compatible with the Windows operating system. Before the VR eye-tracker began to record, a 5-point calibration was performed first to make sure the eye movements of the participant were accurately recorded. After the participant had adjusted the VR eye-tracker to a proper, comfortable position with the assistance of the researcher, the researcher gave instructions to guide the participant in finishing the calibration without moving his or her head too much.
Fig. 5. VR eye-tracker equipment.
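The paper does not specify how the 5-point calibration maps pupil position to screen coordinates. The sketch below illustrates one common low-cost approach, assumed here for illustration only: fitting, per screen axis, a four-parameter model s = c0 + c1·px + c2·py + c3·px·py to the five pupil samples by least squares. All coordinates and sample values are hypothetical.

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <utility>

using Vec4 = std::array<double, 4>;

// Solve the 4x4 normal equations (A^T A) c = A^T b by Gaussian elimination,
// where each row of A is [1, px, py, px*py] for one calibration sample.
Vec4 solveNormalEquations(const double P[5][2], const double s[5]) {
    double M[4][5] = {};                       // augmented matrix [A^T A | A^T b]
    for (int k = 0; k < 5; ++k) {
        double row[4] = {1.0, P[k][0], P[k][1], P[k][0] * P[k][1]};
        for (int i = 0; i < 4; ++i) {
            for (int j = 0; j < 4; ++j) M[i][j] += row[i] * row[j];
            M[i][4] += row[i] * s[k];
        }
    }
    for (int i = 0; i < 4; ++i) {              // elimination with partial pivoting
        int p = i;
        for (int r = i + 1; r < 4; ++r)
            if (std::fabs(M[r][i]) > std::fabs(M[p][i])) p = r;
        for (int j = 0; j < 5; ++j) std::swap(M[i][j], M[p][j]);
        for (int r = i + 1; r < 4; ++r) {
            double f = M[r][i] / M[i][i];
            for (int j = i; j < 5; ++j) M[r][j] -= f * M[i][j];
        }
    }
    Vec4 c{};
    for (int i = 3; i >= 0; --i) {             // back substitution
        double acc = M[i][4];
        for (int j = i + 1; j < 4; ++j) acc -= M[i][j] * c[j];
        c[i] = acc / M[i][i];
    }
    return c;
}

int main() {
    // Hypothetical pupil-center samples for the five calibration targets.
    double pupil[5][2] = {{310, 240}, {420, 238}, {315, 330}, {425, 332}, {368, 285}};
    double targetX[5]  = {0, 1920, 0, 1920, 960};   // screen corners + center
    double targetY[5]  = {0, 0, 1080, 1080, 540};
    Vec4 cx = solveNormalEquations(pupil, targetX);
    Vec4 cy = solveNormalEquations(pupil, targetY);
    double px = 370, py = 290;                      // a new pupil sample
    double sx = cx[0] + cx[1]*px + cx[2]*py + cx[3]*px*py;
    double sy = cy[0] + cy[1]*px + cy[2]*py + cy[3]*px*py;
    std::printf("gaze estimate: (%.1f, %.1f)\n", sx, sy);
    return 0;
}
```

With more calibration targets, the same normal-equations code generalizes unchanged; only the sample arrays grow.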

Procedure
Similar to other eye movement experiments, the experiment of the present study started with precautions and instructions. The experimental instructions (in Chinese) were provided to the participants so that they knew the procedure. Then, before the VR eye-tracker started recording, a 5-point calibration procedure had to be performed so that the system could accurately track eye behaviors. After assisting a participant in properly putting on the VR eye-tracker, the researcher helped the participant complete the 5-point calibration procedure while reminding the participant to avoid moving his or her head as much as possible. While performing the calibration procedure, the participant performed saccadic eye movements; that is, the eyes moved quickly in the same direction and with the same amplitude. For the eyes to fixate on the stimulus, the fovea (the central pit of the retina) had to be aligned at the same position. Sometimes, however, there were still incidents of misalignment, especially during the first few fixations. Without binocular coordination, saccades caused divergent fixations and consequently affected the reading process. In such cases of divergence, binocular calibration had to be re-established [60]. The VR eye-tracker calculated the coordinates on the screen corresponding to the rotation angle of the eyeballs according to the calibration result. After calibration, the formal experiment could be performed. During the experiment, the participant could freely move his or her head. After every debugging task, the participant used the handheld controller to answer the question interactively for each program.

Data Collection
During the experiment procedure, the participant was asked to view C++ code via the VR device. In addition to the built-in eye-tracker recording the eye movement data, the participant needed to use the handheld controller to operate the experiment and choose the answers to the questions. After the participant had viewed all the questions in the experiment, the computer (notebook or tablet) connected to the VR eye-tracker automatically exported the participant's eye movement data. The exported data were video files and text files: the video files (.wmv) were used to define the ROIs, and the text files (.txt) contained the eye movement data of the participants. After the collection of eye movement data, the initial compilation of information was carried out with two supplementary tools of the eye movement analysis software—the dynamic ROIs tool and the fixation calculator tool—as shown in Fig. 6. The VR dynamic ROIs tool mainly helps the researchers define the ROIs, whereas the fixation calculator tool automatically processes the ROI when ROIs overlap. This can avoid potential errors in the data analysis process. The steps of the ROI definition process are as follows:

1) Import the recorded VR video file (.wmv) into the dynamic ROIs tool software.

2) Execute the recorded VR video file after importing.

3) Left-click and drag over the frame region to be analyzed, then release the mouse button to complete the definition of an ROI.

4) Repeat the previous step according to the fixation calculator tool rules. After all the ROIs in the entire file have been defined, the ROI information can be utilized by the analysis software.

Fig. 6. The system framework of VR eye tracker analysis software.
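The internals of the dynamic ROIs tool and the fixation calculator tool are not documented in the paper. As a hedged sketch, the code below shows one plausible way a fixation point could be assigned to a defined ROI, resolving overlapping ROIs by preferring the smallest enclosing rectangle; the overlap rule and all field names are assumptions for illustration, not the tools' documented behavior.

```cpp
#include <cstdio>
#include <optional>
#include <string>
#include <vector>

struct ROI {
    std::string name;                 // e.g., "ROI3"
    double x0, y0, x1, y1;            // rectangle in video-frame coordinates
    double area() const { return (x1 - x0) * (y1 - y0); }
    bool contains(double x, double y) const {
        return x >= x0 && x <= x1 && y >= y0 && y <= y1;
    }
};

// Assign a fixation point to an ROI; if several ROIs overlap at that point,
// prefer the smallest enclosing one (assumed rule, for illustration only).
std::optional<std::string> assignROI(double x, double y,
                                     const std::vector<ROI>& rois) {
    const ROI* best = nullptr;
    for (const auto& r : rois)
        if (r.contains(x, y) && (!best || r.area() < best->area()))
            best = &r;
    if (!best) return std::nullopt;   // fixation outside every defined ROI
    return best->name;
}

int main() {
    std::vector<ROI> rois = {{"ROI2", 100, 120, 600, 200},
                             {"ROI3", 100, 220, 600, 320}};
    if (auto hit = assignROI(250, 260, rois))
        std::printf("fixation falls in %s\n", hit->c_str());
    return 0;
}
```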

In this study, the eye movement data and analysis software were used to collect the participants' visual attention data. The indicators below and SPSS for Windows 22 were then used to conduct one-way ANOVA analyses in order to examine whether participants with different cognitive styles and of different genders showed significant differences in visual attention when they were asked to debug the VR programs. The time unit of the eye movement indicators is the millisecond (ms).

Latency of first fixation (LFF): The length of time that elapsed before the participant's gaze first entered the defined ROI while the participant was debugging a program.

Duration of first fixation (DFF): The duration of the participant's first fixation in the ROI while the participant was debugging a program.

Total fixation duration (TFD): The total time during which the fixation point fell into an ROI while the program was debugged, including the time of the first fixation and all succeeding fixations.

Fixation counts (FC): The total number of fixations wherein the fixation point fell into a certain ROI when the participant was debugging the code. The number of fixations reflects the importance of this particular ROI. A higher number indicates that this ROI is more important to the participant, or that it can provide more clues.
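Putting the four definitions together, the following is a minimal sketch of how LFF, DFF, TFD, and FC could be computed per ROI from a time-ordered fixation log; the record layout is an assumption for illustration, not the format of the study's software.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct RoiFixation {
    std::string roi;   // ROI the fixation fell into (e.g., "ROI3")
    double onset;      // fixation onset, in ms since trial start
    double duration;   // fixation duration, in ms
};

struct RoiMetrics {
    double lff = -1;   // latency of first fixation (ms); -1 = ROI never fixated
    double dff = 0;    // duration of first fixation (ms)
    double tfd = 0;    // total fixation duration (ms)
    int    fc  = 0;    // fixation count
};

// Fixations must be in chronological order so the first one seen per ROI
// really is the first fixation in that ROI.
std::map<std::string, RoiMetrics>
computeMetrics(const std::vector<RoiFixation>& fixations) {
    std::map<std::string, RoiMetrics> m;
    for (const auto& f : fixations) {
        RoiMetrics& r = m[f.roi];
        if (r.fc == 0) {             // first entry into this ROI
            r.lff = f.onset;         // LFF: time elapsed before first fixation
            r.dff = f.duration;      // DFF: duration of that first fixation
        }
        r.tfd += f.duration;         // TFD: first and all succeeding fixations
        r.fc  += 1;                  // FC: number of fixations in the ROI
    }
    return m;
}

int main() {
    std::vector<RoiFixation> log = {{"ROI1", 1200, 250}, {"ROI2", 1500, 180},
                                    {"ROI1", 1750, 310}, {"ROI3", 2100, 220}};
    for (const auto& [roi, r] : computeMetrics(log))
        std::printf("%s: LFF=%.0f DFF=%.0f TFD=%.0f FC=%d\n",
                    roi.c_str(), r.lff, r.dff, r.tfd, r.fc);
    return 0;
}
```

The per-participant, per-ROI values produced this way are what the one-way ANOVA described above compares across groups.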



Data Output
Participants are asked to view the C++ code shown on the VR device during the experiment. Apart from recording the eye movement data with the eye-tracker, participants are also required to answer the questions with the handheld controller. After all three questions have been viewed, the notebook or tablet computer connected to the VR eye-tracker automatically produces the eye movement data. The output files include a video clip and a text file: the video (.wmv) is used to define the ROIs, and the text file (.txt) includes the eye movement data.

Data Analysis
In the present study, the participants’ eye movement data was collected via the eye-tracking instrument and eye movement visualization analysis software. Differences in visual attention during the debugging task were highlighted for thorough analysis and comparison. In addition, the four common eye movement variables used in visual behavior analysis were the LFF, DFF, TFD, and FC in each ROI.
According to the collected eye movement data, the study sought to find out whether there is any difference in the participants’ visual attention to the VR programs. Thus, the present research raises the following three key hypotheses:

Hypothesis 1 (H1): Participants' gaze sequence would follow the ascending numerical order of the ROIs during the debugging task for each program.

Hypothesis 2 (H2): Among all types of ROI, those containing conditional or logical statements would hold the participants' visual attention the longest during the debugging task for each program.

Hypothesis 3 (H3): Participants' TFD and FC would exhibit significant differences between groups during the debugging task for each program.


Results and Discussion

In the experiment procedure of this study, the eye movements of a participant were instantaneously projected onto the VR screen connected to a computer to determine the participant's visual behavior. The VR eye-tracking snapshots of the experiment procedure are shown in Fig. 7.

Fig. 7. Snapshots of the experimental procedure.

Differences in Visual Attention among Different Cognitive Styles
This study analyzed the eye movement data of participants with different cognitive styles (16 FD and 16 FI participants) as they read questions of program design, in order to explore the differences in visual attention among individuals with different cognitive behaviors. The eye movement indicators analyzed and compared in this study are as follows: LFF, DFF, TFD, and FC of each ROI (Tables 1–6).
According to the eye movement data in Tables 1–3 and the results of the ANOVA, as participants read the three programming questions, FD and FI individuals exhibited different visual behavior based on the LFF data, as they followed different fixation sequences. Therefore, H1 was rejected based on the results of LFF. In addition, across participants with different cognitive styles, there appeared to be no significant differences in LFF for the viewed ROIs. According to the DFF data, the FD and FI participants both held their attention on ROI8 for the longest duration in the second question, whereas the ROI that held attention longest differed by cognitive style in the other two questions. Therefore, H2 was partly supported based on the results of DFF. The ROIs to which the participants paid attention showed differences in visual attention. In addition, no significant difference can be observed in the DFF of participants with different cognitive styles.

Table 1. LFF and DFF of Q1 for each ROI among different cognitive styles (unit: ms)
ROI  LFF: FI (n=16)  LFF: FD (n=16)  DFF: FI (n=16)  DFF: FD (n=16)
ROI1 18738.57 22391 503.64 215.57
ROI2 23290.42 22049.21 338.83 304.57
ROI3 38514.67 21474 260.78 346.43
ROI4 33943 21583.25 117.71 270.5

Table 2. LFF and DFF of Q2 for each ROI among different cognitive styles (unit: ms)
ROI  LFF: FI (n=16)  LFF: FD (n=16)  DFF: FI (n=16)  DFF: FD (n=16)
ROI5 122030.9 143721.63 158.3 129.88
ROI6 132729.89 160340.13 184.67 161.5
ROI7 128485.83 163431.14 47.67 259.57
ROI8 93261.6 176898.6 526.6 692.8
ROI9 95666.67 197176.8 177.33 184
ROI10 142419.88 159275.67 397 202.67
ROI11 115511.83 166100 213.17 361.67
ROI12 113656.6 167863.5 175.6 176.5

Table 3. LFF and DFF of Q3 for each ROI among different cognitive styles (unit: ms)
ROI  LFF: FI (n=16)  LFF: FD (n=16)  DFF: FI (n=16)  DFF: FD (n=16)
ROI13 182831.91 292716.67 314.55 159.67
ROI14 225174.25 280495.25 145.13 191.75
ROI15 230607.75 362115.5 197.75 351.5
ROI16 225711.4 362067 187.2 384.5
ROI17 215834.8 246518 389 128

According to the data in Tables 4–6 as well as the eye movement data analyzed by ANOVA, individuals with different cognitive styles exhibited no significant differences in TFD in the debugging behavior for the three questions. Therefore, H3 was rejected based on the results of TFD. Based on the comparison of TFD in different ROIs, however, individuals with different cognitive styles exhibited inconsistent data across the ROIs of the second programming question, but the TFD of FI participants was longer than that of the FD participants in all the ROIs of the other two questions. According to the ANOVA results, in the debugging behavior for these three questions, there was also no significant difference in the FC of individuals with different cognitive styles; therefore, H3 was rejected based on the results of FC as well. According to the comparison of ROIs in all programming questions, however, the FC of FI participants was higher than that of the FD ones.

Table 4. TFD and FC of Q1 for each ROI among different cognitive styles (TFD unit: ms; FC: count)
ROI  TFD: FI (n=16)  TFD: FD (n=16)  FC: FI (n=16)  FC: FD (n=16)
ROI1 283084 121048 3007 1193
ROI2 86108 71864 1155 690
ROI3 78935 48222 950 672
ROI4 63702 31010 651 370

Table 5. TFD and FC of Q2 for each ROI among different cognitive styles (TFD unit: ms; FC: count)
ROI  TFD: FI (n=16)  TFD: FD (n=16)  FC: FI (n=16)  FC: FD (n=16)
ROI5 89592 54259 962 537
ROI6 55331 21116 537 207
ROI7 7342 12730 123 99
ROI8 20141 37987 270 183
ROI9 47853 8276 551 133
ROI10 86106 5958 723 122
ROI11 85066 8593 609 181
ROI12 20772 8351 222 135

Table 6. TFD and FC of Q3 for each ROI among different cognitive styles (TFD unit: ms; FC: count)
ROI  TFD: FI (n=16)  TFD: FD (n=16)  FC: FI (n=16)  FC: FD (n=16)
ROI13 86185 26597 877 259
ROI14 41175 15930 440 111
ROI15 11980 11699 177 79
ROI16 25596 24932 202 189
ROI17 42024 17271 339 302

Differences in Visual Attention by Gender
This study analyzed the eye movement data of participants by gender (22 men and 10 women) as they read the three programming questions, in order to explore the differences in visual attention between the genders. The eye movement metrics analyzed and compared in this study are as follows: LFF, DFF, TFD, and FC of each ROI (Tables 7–12).
According to the eye movement data in Tables 7–9, men and women followed different sequences of fixation as they read the three programming questions based on the LFF data. Therefore, H1 was rejected based on the results of LFF. According to the DFF data, male and female participants both held their attention on ROI8 for the longest duration in the second question, whereas the ROI that held attention longest differed by gender in the other two questions. In the DFF of participants by gender, no significant difference can be observed in the ANOVA except for ROI11 in the second question ($F_{(1, 30)}$=8.81, p<0.01, $η_p^2$=0.227).

Table 7. LFF and DFF of Q1 for each ROI by gender (unit: ms)
ROI  LFF: Male (n=22)  LFF: Female (n=10)  DFF: Male (n=22)  DFF: Female (n=10)
ROI1 19900.67 22557.14 398.48 243
ROI2 29503.5 11611.8 435.25 136.6
ROI3 49536.38 12582.38 267.38 329.13
ROI4 50091.25 17652.71 142.75 190.71

Table 8. LFF and DFF of Q2 for each ROI by gender (unit: ms)
ROI  LFF: Male (n=22)  LFF: Female (n=10)  DFF: Male (n=22)  DFF: Female (n=10)
ROI5 129758.6 141234.33 159.73 75.33
ROI6 143697.54 152305.5 210 56
ROI7 165021.86 126630 191.71 126.83
ROI8 135363.6 134796.6 739.2 480.2
ROI9 154257.67 126867.6 250.5 96.2
ROI10 166940.5 134500.67 279 378.5
ROI11 126959 135082.33 90.67 348.67
ROI12 102171.5 139933.4 327.5 115.2

Table 9. LFF and DFF of Q3 for each ROI by gender (unit: ms)
ROI  LFF: Male (n=22)  LFF: Female (n=10)  DFF: Male (n=22)  DFF: Female (n=10)
ROI13 210434.08 292716.67 294.17 204.5
ROI14 266080.25 280495.25 214.88 52.25
ROI15 350132.67 362115.5 383.33 114.67
ROI16 336959.67 362067 276 219.25
ROI17 253142 246518 152.5 442

According to the eye movement data in Tables 10–12 and the results of the ANOVA, no significant difference in the TFD data can be observed as participants of different genders read the three programming questions. Therefore, H3 was rejected based on the results of TFD. According to the comparison of the TFD of ROIs in the three programming questions, however, male participants had longer TFD than female ones, except for ROI12 in the second question and ROI17 in the third question, where female participants had longer TFD than their male counterparts.

Table 10. TFD and FC of Q1 for each ROI by gender (TFD unit: ms; FC: count)
ROI  TFD: Male (n=22)  TFD: Female (n=10)  FC: Male (n=22)  FC: Female (n=10)
ROI1 369683 34449 3743 457
ROI2 96899 61073 943 902
ROI3 37227 89930 346 1276
ROI4 31999 62713 226 795

Table 11. TFD and FC of Q2 for each ROI by gender (TFD unit: ms; FC: count)
ROI  TFD: Male (n=22)  TFD: Female (n=10)  FC: Male (n=22)  FC: Female (n=10)
ROI5 136462 7389 1410 89
ROI6 63142 13305 558 186
ROI7 13337 6735 114 108
ROI8 48525 9603 287 166
ROI9 34547 21582 343 341
ROI10 72383 19681 501 344
ROI11 66161 27498 382 408
ROI12 9423 19700 63 294

Table 12. TFD and FC of Q3 for each ROI by gender (TFD unit: ms; FC: count)
ROI  TFD: Male (n=22)  TFD: Female (n=10)  FC: Male (n=22)  FC: Female (n=10)
ROI13 98369 14413 911 225
ROI14 47066 10039 358 193
ROI15 11890 11789 63 193
ROI16 39570 10958 207 184
ROI17 25935 33360 172 469

In the FC of the different genders, except for ROI3 in the first program ($F_{(1, 30)}$=8.59, p=0.01, $η_p^2$=0.223), there appeared to be no significant difference. Therefore, H3 was partly supported based on the results of FC. According to the comparison of FCs in all ROIs, however, male and female participants yielded a less consistent set of results.
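For readers unfamiliar with the reported statistics, the sketch below shows how a two-group one-way ANOVA yields F-values with the degrees of freedom seen above: two groups give a between-group df of 1, and 32 participants give a within-group df of 32 - 2 = 30. The input values are placeholders for illustration, not study data.

```cpp
#include <cstdio>
#include <vector>

double mean(const std::vector<double>& v) {
    double s = 0;
    for (double x : v) s += x;
    return s / v.size();
}

// One-way ANOVA F statistic for two groups:
// F = (SS_between / 1) / (SS_within / (N - 2)).
double oneWayF(const std::vector<double>& g1, const std::vector<double>& g2) {
    double m1 = mean(g1), m2 = mean(g2);
    size_t n = g1.size() + g2.size();
    double grand = (m1 * g1.size() + m2 * g2.size()) / n;
    double ssb = g1.size() * (m1 - grand) * (m1 - grand)
               + g2.size() * (m2 - grand) * (m2 - grand);   // between, df = 1
    double ssw = 0;                                          // within, df = n - 2
    for (double x : g1) ssw += (x - m1) * (x - m1);
    for (double x : g2) ssw += (x - m2) * (x - m2);
    return (ssb / 1.0) / (ssw / (n - 2));
}

int main() {
    // Placeholder per-participant DFF values (ms) for two groups.
    std::vector<double> male{210, 320, 180, 450, 95, 260};
    std::vector<double> female{520, 410, 615, 390};
    std::printf("F(1, %zu) = %.2f\n", male.size() + female.size() - 2,
                oneWayF(male, female));
    return 0;
}
```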


Conclusion and Future Work

This study used VR technology to design a virtual classroom scene and the 3D rendering of C++ source code. Through the integrated application of eye-tracking technology to VR technology and by analyzing the eye movement data, the study explored participants' visual behavior as they viewed the C++ source code in the virtual classroom scene. Although VR and eye-tracking technologies are not new, participants could use the integrated VR eye-tracker to support dynamic behavior analysis while immersed in a VR classroom scene. The results of this study can serve as an important reference for the development of the virtual classroom, as follows:
According to the LFF data, participants with different cognitive styles showed different sequences of fixation, and there was no significant difference in viewing the ROIs between the FI and FD groups. Based on the DFF data, there was a partly significant difference in participants' visual fixation when viewing the ROIs of the three programs between the FI and FD groups.
According to the TFD data, there was no significant difference in the TFD of participants with different cognitive styles. In terms of ROIs, participants with different cognitive styles yielded inconsistent data in the second programming question; in all the ROIs of the other two questions, the fixation duration of FI participants was consistently longer than that of the FD ones. According to the FC data, there was no significant difference between participants with different cognitive styles. Nonetheless, the comparison by ROI showed that FI participants had a higher FC than FD ones in each ROI of the three programming questions.
According to the LFF and DFF data, there was a partly significant difference in the participants' visual fixation when viewing the ROIs of the three programs by gender.
Based on the comparison of the TFD of ROIs in the three programming questions, male participants were found to have longer TFD than female ones. Moreover, based on the FC data, less consistent comparison results were yielded between male and female participants when viewing the ROIs of the three programs.
In the present study, we developed a self-made, low-budget VR eye-tracker for eye movement data visual analytics because there was a lack of market-ready equipment integrating eye tracking into a VR HMD. Nevertheless, the low-cost VR eye-tracker proved adequate for the interaction purposes of this study. In the future, professional VR eye-trackers will become a means of exploring the benefits of advanced analysis techniques for real-time analysis in VR environments.


Author’s Contributions

Conceptualization, JCH, CCW. Funding acquisition, JCH, CCW. Investigation and methodology, JCH, CCW. Resources, CCW. Supervision, JCH, CCW. Writing of the original draft, JCH, CCW. Writing of the review and editing, CCW. Software, CCW. All authors read and approved the final manuscript.


Funding

This work was supported by the Ministry of Science and Technology, Taiwan (No. MOST-107-2511-H-309-001 and MOST-109-2221-E-025-009).


Competing Interests

The authors declare that they have no competing interests.


Author Biography

Jason C. Hung is an Associate Professor in the Department of Computer Science and Information Engineering at National Taichung University of Science and Technology, Taiwan, ROC. His research interests include multimedia systems, e-Learning, affective computing, artificial intelligence, and social computing. From 1999 to date, he has been a part-time faculty member of the Computer Science and Information Engineering Department at Tamkang University. Dr. Hung received his BS and MS degrees in Computer Science and Information Engineering from Tamkang University in 1996 and 1998, respectively. He also received his Ph.D. in Computer Science and Information Engineering from Tamkang University in 2001. Dr. Hung has participated in many international academic activities, including the organization of many international conferences. He is the founder of the International Conference on Frontier Computing. He served as Hon. Treasurer of the IET Taipei LN. In April 2014, he was elected as a Fellow of the Institution of Engineering and Technology (FIET), and in November 2014 he was elected vice chair of the IET Taipei LN. Since June 2015, he has been Editor-in-Chief of the International Journal of Cognitive Performance Support and serves as deputy editor of the International Journal of Social and Humanistic Computing.

Chun-Chia Wang was born in 1966. He received his M.S. and Ph.D. degrees in Computer Science from Tamkang University, Taiwan, in 1994 and 1997, respectively. He is now a Professor in the Department of Computer Science and Information Engineering at Chang Jung Christian University, Tainan City, Taiwan. He was Chairman of the Department of Information Management and Director of the Computer Center at Taipei City University of Science and Technology. His research interests include e-Learning, mobile learning, multimedia computing and networking, and eye tracking technology.


References

[1] J. Jerald, The VR Book: Human-Centered Design for Virtual Reality. New York, NY: ACM Books, 2016.
[2] S. S. Liaw, H. M. Huang, and C. M. Lai, “A study of virtual reality and problem-based learning applied in mobile medical education,” Chinese Journal of Science Education, vol. 19, no. 3, pp. 237-256, 2011.
[3] E. D. Gagne, C. W. Yekovich, and F. R. Yekovich, The Cognitive Psychology of School Learning, 2nd ed. New York, NY: HarperCollins College Publishers, 1993.
[4] L. Y. Cheng, Cognitive Psychology: Theories and Application. Taipei, Taiwan: Wu-Nan Culture Enterprise, 1993.
[5] S. Y. Chen, G. D. Magoulas, and D. Dimakopoulos, “A flexible interface design for web directories to accommodate different cognitive styles,” Journal of the American Society for Information Science and Technology, vol. 56, no. 1, pp. 70-83, 2005.
[6] P. Wang, W. B. Hawk, and C. Tenopir, “Users' interaction with World Wide Web resources: an exploratory study using a holistic approach,” Information Processing and Management, vol. 36, no. 2, pp. 229-251, 2000.
[7] A. Durndell and Z. Haag, “Computer self efficacy, computer anxiety, attitudes towards the Internet and reported experience with the Internet, by gender, in an East European sample,” Computers in Human Behavior, vol. 18, no. 5, pp. 521-535, 2002.
[8] S. Cassidy and P. Eachus, “Developing the computer user self-efficacy (CSE) scale: investigating the relationship between computer self-efficacy, gender and experience with computers,” Journal of Educational Computing Research, vol. 26, no. 2, pp. 133-153, 2002.
[9] C. S. Ong and J. Y. Lai, “The gender differences in perception and relationships among dominants of e-Learning acceptance,” Computers in Human Behavior, vol. 22, no. 5, pp. 816-829, 2006.
[10] L. J. Francis, “Measuring attitude toward computers among undergraduate college students: the affective domain,” Computers & Education, vol. 20, no. 3, pp. 251-256, 1993.
[11] V. Galpin, I. Sanders, H. Turner, and B. Venter, “Computer self-efficacy, gender, and educational background in South Africa,” IEEE Technology and Society Magazine, vol. 22, no. 3, pp. 43-48, 2003.
[12] Y. T. Lin, C. C. Wu, T. Y. Hou, Y. C. Lin, F. Y. Yang, and C. H. Chang, “Tracking students’ cognitive processes during program debugging: an eye-movement approach,” IEEE Transactions on Education, vol. 59, no. 3, pp. 175-186, 2016.
[13] A. L. Carrillo and J. A. Falgueras, “Proposal and testing goals-guided interaction for occasional users,” Human-centric Computing and Information Sciences, vol. 10, article no. 4, 2020. https://doi.org/10.1186/s13673-020-0209-2
[14] H. C. Chen, H. D. Lai, and F. C. Chiu, “Eye tracking technology for learning and education,” Journal of Research in Education Sciences, vol. 55, no. 4, pp. 39-68, 2010.
[15] K. Rayner, “Eye movements in reading and information processing: 20 years of research,” Psychological Bulletin, vol. 124, no. 3, pp. 372-422, 1998.
[16] D. Vergara, M. Lorenzo, and M. P. Rubio, “Virtual environments in materials science and engineering: the students’ opinion,” in Materials Science and Engineering: Concepts, Methodologies, Tools, and Applications. Hershey, PA: IGI Global, 2017, pp. 1465-1483.
[17] N. Sala, “Multimedia and VR in architecture and in engineering education,” in Proceedings of the 2nd WSEAS/IASME International Conference on Educational Technologies, Bucharest, Romania, 2006, pp. 18-23.
[18] M. Mihelj, D. Novak, and S. Begus, Virtual Reality Technology and Applications. Dordrecht, The Netherlands: Springer, 2014.
[19] B. Temkin, E. Acosta, P. Hatfield, E. Onal, and A. Tong, “Web-based three-dimensional virtual body structures: W3D-VBS,” Journal of the American Medical Informatics Association, vol. 9, no. 5, pp. 425-436, 2002.
[20] H. Hoffman, M. Murray, R. Curlee, and A. Fritchle, “Anatomic visualizeR: teaching and learning anatomy with virtual reality,” in Information Technologies in Medicine: Medical Simulation and Education I. Hoboken, NJ: John Wiley & Sons, 2001, pp. 205-218.
[21] H. Brenton, J. Hernandez, F. Bello, P. Strutton, S. Purkayastha, T. Firth, and A. Darzi, “Using multimedia and Web3D to enhance anatomy teaching,” Computers & Education, vol. 49, no. 1, pp. 32-53, 2007.
[22] G. Bang, J. Yang, K. Oh, and L. Ko, “Interactive experience room using infrared sensors and user’s poses,” Journal of Information Processing Systems, vol. 13, no. 4, pp. 876-892, 2017.
[23] L. Daghestani, “The design, implementation and evaluation of a desktop virtual reality for teaching numeracy concepts via virtual manipulatives,” Ph.D. dissertation, University of Huddersfield, Huddersfield, UK, 2013.
[24] H. W. Wang and S. C. Lin, “Investigate the effect of learning motivation and performance based on experimental learning cycle by using immersive virtual reality device,” in Proceedings of the 21st Global Chinese Conference on Computers in Education, Beijing, China, 2017, pp. 798-801.
[25] M. Siegrist, C. Y. Ung, M. Zank, M. Marinello, A. Kunz, C. Hartmann, and M. Menozzi, “Consumers’ food selection behaviors in three-dimensional (3D) virtual reality,” Food Research International, vol. 117, pp. 50-59, 2019.
[26] F. Pierre, F. Zhao, and A. Koufakou, “Learning programming in virtual reality environments,” in HCI in Games. Cham, Switzerland: Springer, 2020, pp. 448-457. https://doi.org/10.1007/978-3-030-50164-8_33
[27] F. J. Gallego-Duran, C. Villagra-Arnedo, F. Llorens-Largo, and R. Molina-Carmona, “PLMan: a game-based learning activity for teaching logic thinking and programming,” International Journal of Engineering Education, vol. 33, no. 2B, pp. 807-815, 2017.
[28] R. Riding and I. Cheema, “Cognitive styles: an overview and integration,” Educational Psychology, vol. 11, no. 3-4, pp. 193-215, 1991.
[29] G. A. Kelly, The Psychology of Personal Constructs. New York, NY: Norton, 1955.
[30] U. Neisser, Cognitive Psychology. Englewood Cliffs, NJ: Prentice-Hall, 1967.
[31] S. Messick, “Personality consistencies in cognition and creativity,” in Individuality in Learning. San Francisco, CA: Jossey-Bass Publishers, 1976.
[32] H. A. Witkin, C. A. Moore, D. R. Goodenough, and P. W. Cox, “Field-dependent and field-independent cognitive styles and their educational implications,” Review of Educational Research, vol. 47, no. 1, pp. 1-64, 1977.
[33] L. L. Hsu, “The effect of virtual reality distance learning transfer with different cognitive style,” Ph.D. dissertation, National Kaohsiung Normal University, Kaohsiung City, Taiwan, 2000.
[34] M. C. Liu, “A study on the process of problem-solving in object-based authoring system programming,” Journal of National Hualien Teachers College, vol. 11, pp. 205-230, 1999.
[35] Y. C. Hung, “A comparative study of the personality characteristic between college students in the object-oriented programming course,” Journal of Research on Elementary and Secondary Education, vol. 4, pp. 127-142, 1998.
[36] C. S. Chang, C. M. Chen, and Y. C. Lin, “A visual interactive reading system based on eye tracking technology to improve digital reading performance,” in Proceedings of the 7th International Congress on Advanced Applied Informatics (IIAI-AAI), Yonago, Japan, 2018, pp. 182-187.
[37] C. M. Chen, J. Y. Wang, and Y. C. Lin, “A visual interactive reading system based on eye tracking technology to improve digital reading performance,” The Electronic Library, vol. 37, no. 4, pp. 680-702, 2019.
[38] M. N. Kholid, P. S. Hamida, L. N. Pradana, and S. Maharani, “Students’ critical thinking depends on their cognitive style,” International Journal of Scientific & Technology Research, vol. 9, no. 1, pp. 1045-1049, 2020.
[39] Y. Yusnaini, B. Burhanudin, and A. Hakiki, “Field dependence cognitive and learner aptitudes: experimental study on accounting student performance,” PalArch’s Journal of Archaeology of Egypt/Egyptology, vol. 17, no. 6, pp. 11346-11362, 2020.
[40] A. Jackson and P. Kutnick, “Groupwork and computers: task type and children's performance,” Journal of Computer Assisted Learning, vol. 12, no. 3, pp. 162-171, 1996.
[41] V. Makrakis and T. Sawada, “Gender, computers and other school subjects among Japanese and Swedish students,” Computers & Education, vol. 26, no. 4, pp. 225-231, 1996.
[42] W. W. F. Lau and H. K. Yuen, “Exploring the effects of gender and learning styles on computer programming performance: implications for program pedagogy,” British Journal of Educational Technology, vol. 40, no. 4, pp. 696-712, 2009.
[43] C. E. Lai, “Development and research on a Chinese visual programming system for children,” Ph.D. dissertation, National Taipei Teachers’ College, Taipei City, Taiwan, 2004.
[44] S. Papavlasopoulou, K. Sharma, and M. N. Giannakos, “Coding activities for children: coupling eye-tracking with qualitative data to investigate gender differences,” Computers in Human Behavior, vol. 105, article no. 105939, 2020. https://doi.org/10.1016/j.chb.2019.03.003
[45] M. E. Crosby and J. Stelovsky, “Subject differences in reading of computer algorithms,” in Designing and Using Human-Computer Interfaces and Knowledge Based Systems. Amsterdam, The Netherlands: Elsevier, 1989, pp. 137-144.
[46] M. E. Crosby and J. Stelovsky, “How do we read algorithms? A case study,” Computers, vol. 23, no. 1, pp. 24-35, 1990.
[47] B. Sharif, M. Falcone, and J. I. Maletic, “An eye-tracking study on the role of scan time in finding source code defects,” in Proceedings of the Symposium on Eye Tracking Research and Applications, Santa Barbara, CA, 2012, pp. 381-384.
[48] C. Kim, J. Yuan, L. Vasconcelos, M. Shin, and R. B. Hill, “Debugging during block-based programming,” Instructional Science, vol. 46, no. 5, pp. 767-787, 2018.
[49] C. Proctor, “Measuring the computational in computational participation: debugging interactive stories in middle school computer science,” in Proceedings of the 13th International Conference on Computer Supported Collaborative Learning (CSCL), Lyon, France, 2019, pp. 104-111.
[50] S. Xu and V. Rajlich, “Cognitive process during program debugging,” in Proceedings of the 3rd IEEE International Conference on Cognitive Informatics, Victoria, Canada, 2004, pp. 176-182.
[51] W. E. Wong, Y. Qi, L. Zhao, and K. Y. Cai, “Effective fault localization using code coverage,” in Proceedings of the 31st Annual International Computer Software and Applications Conference, Beijing, China, 2007, pp. 449-456.
[52] E. Lahtinen, K. Ala-Mutka, and H. M. Jarvinen, “A study of the difficulties of novice programmers,” ACM SIGCSE Bulletin, vol. 37, no. 3, pp. 14-18, 2005.
[53] R. Bednarik, “Expertise-dependent visual attention strategies develop over time during debugging with multiple code representations,” International Journal of Human-Computer Studies, vol. 70, no. 2, pp. 143-155, 2012.
[54] R. Bednarik, N. Myller, E. Sutinen, and M. Tukiainen, “Program visualization: comparing eye-tracking patterns with comprehension summaries and performance,” in Proceedings of the 18th Annual Workshop of the Psychology of Programming Interest Group, Brighton, UK, 2006, pp. 68-82.
[55] R. Bednarik and M. Tukiainen, “Temporal eye-tracking data: Evolution of debugging strategies with multiple representations,” in Proceedings of the 2008 Symposium on Eye Tracking Research & Applications, Savannah, GA, 2008, pp. 99-102.
[56] U. Obaidellah, M. Al Haek, and P. C. H. Cheng, “A survey on the usage of eye-tracking in computer programming,” ACM Computing Surveys, vol. 51, no. 1, pp. 1-58, 2018.
[57] J. C. Sun and K. Y. Hsu, “A smart eye-tracking feedback scaffolding approach to improving students’ learning self-efficacy and performance in a C programming course,” Computers in Human Behavior, vol. 95, pp. 66-72, 2019.
[58] R. Bednarik, C. Schulte, L. Budde, B. Heinemann, and H. Vrzakova, “Eye-movement modeling examples in source code comprehension: a classroom study,” in Proceedings of the 18th Koli Calling International Conference on Computing Education Research, Koli, Finland, 2018, pp. 1-8.
[59] H. A. Witkin, P. K. Oltman, E. Raskin, and S. A. Karp, A Manual for the Group Embedded Figures Test. Palo Alto, CA: Consulting Psychologists Press, 1971.
[60] M. Vernet and Z. Kapoula, “Binocular motor coordination during saccades and fixations while reading: a magnitude and time analysis,” Journal of Vision, vol. 9, no. 7, article no. 2, 2009. https://doi.org/10.1167/9.7.2

About this article
Cite this article

Jason C. Hung and Chun-Chia Wang, “The Influence of Cognitive Styles and Gender on Visual Behavior During Program Debugging: A Virtual Reality Eye Tracker Study,” Human-centric Computing and Information Sciences, vol. 11, article no. 22, 2021.

  • Received: 10 September 2020
  • Accepted: 27 April 2021
  • Published: 30 May 2021