Merlin Sunley
The Evolving Process of Scale Development in Games User Research: A Comparative Study

In their review of ten years of games research at CHI, Carter et al. (2014) characterise four paradigms within which player-computer interaction (PCI) research can be categorised:
Operative research uses knowledge from the study of games or play to enact change in the world, e.g. by encouraging education or exercise. Epistemological research uses knowledge garnered from PCI to critically examine the limits of human knowledge as a whole (Lankoski & Holopainen, 2017). Ontological game research is concerned with the formal study of the components and variations in games that make up their rules and mechanics (Aarseth & Grabarczyk, 2018). Practice-based game research is concerned with the experiences and practices that emerge from interactions with games, toys or technology (Carter et al., 2014).
This article aims to compare and evaluate the research presented in the following three articles, and to show some of the ways in which Games User Research (GUR) has advanced since their publication:
Development and validation of the player experience (PX) inventory: A scale to measure player experiences at the level of functional and psychosocial consequences by Abeele et al. (2020), abbreviated PXI.
Game Experience Questionnaire: development of a self-report measure to assess the psychological impact of digital games by IJsselsteijn et al. (2007), abbreviated GEQ.
Measuring Enjoyment of Computer Game Play by Fang et al. (2010), abbreviated MEGP.
All three papers focus on developing research instruments for measuring subjective gameplay experience; in each case the resulting instrument is a survey. However, the rationale behind the creation of each instrument differs in a number of key ways.
IJsselsteijn et al.'s (2007) aim was to create a comprehensive self-report instrument characterising the multi-dimensional experience of digital gameplay, while Fang et al.'s (2010) paper focused on measuring the enjoyment of computer games. Despite their differing scope, both papers fall within the practice-based research paradigm.
Abeele et al.'s (2020) goal, on the other hand, was to develop an instrument enabling GUR researchers to unpick how low-level game design choices are understood and perceived by players on an emotional level. They do this by combining ontological research, in this case measuring PX as a result of "Functional Consequences", with practice-based research, in an attempt to understand and apply how such design choices affect emotional experiences or "Psychosocial Consequences".
Research Phase
Perhaps most contrasting during the research phase is the work carried out for the GEQ. Having secured €2,000,000 in funding from the EU, IJsselsteijn et al. had significantly greater resources available than the other researchers, meaning they could mobilise a variety of research teams to independently perform quantitative and qualitative research in support of their effort.
In the introduction to the GEQ article, the authors state explicitly that in order to gain a comprehensive perspective on digital games experience a "multi-method", "multi-measure" approach must be taken (IJsselsteijn et al., 2007). The theoretical foundations of the project are thus the large body of research performed under the umbrella of the FUGA project. A considerable amount of this research involved biometric measures (Salminen & Ravaja, 2007; Ravaja, 2009), which Drachen et al. (2018) define as methods providing real-time measurement of player responses (EEG, eye tracking, etc.). Ultimately IJsselsteijn et al. settle on seven constructs: competence, sensory and imaginative immersion, flow, tension, challenge, negative affect and positive affect.
The work carried out by Fang et al. used Nabi and Krcmar's (2004) Tripartite Model of Media Enjoyment as its theoretical basis, the rationale being the lack of extant work on the enjoyment of computer game play. According to Fang et al. (2010), existing research did not seek to measure the player's subjective experience of enjoyment but focused instead on UX (user experience) during gameplay.
The Tripartite Model defines enjoyment as a three-dimensional construct consisting of affect, cognition and behaviour. The first, affect, focuses on empathy and positive or negative moods. Cognition considers judgements of story coherence, realism or message, whilst the third, behavioural, dimension considers audience intent and actions during viewing (Nabi & Krcmar, 2004). Fang et al. (2010) point out that the tripartite model was not supported by empirical validation at the time of publication.
Abeele et al. (2020) focus on two lenses through which PX can be conceptualised. The first sees PX as need fulfilment: Abeele et al. (2020) explicate research (Zillmann, 1988; Zillmann, 2015; Deci & Ryan, 1985; amongst others) showing that the motivation to play derives from a desire to fulfil psychological needs, and that 'good' PX comes from satisfying those needs.
The second lens sees humans as hedonists, with the attainment of desired psychological states through gameplay as the goal. Concepts such as Csikszentmihalyi's (1990) flow, immersion as defined by Brown and Cairns (2004) and Jennett et al. (2008), and presence as explained by Takatalo et al. (2009) are discussed among other desirable states. Abeele et al. draw attention to the fact that there is still much debate within academia regarding which states form the root of PX and how these states are aligned.
While the three articles' goals converge on developing tools for analysing player experiences, their theoretical foundations differ greatly. With hindsight, Abeele et al. are able to draw upon over a decade of qualitative and quantitative research, allowing a critical view of previous efforts to measure PX.
This leads them to a synthesis of the abovementioned theories with Means-End Theory (ME) (Gutman, 1982), which advocates focusing on the consequences that unfold during product-user interaction in order to understand PX, and the Mechanics-Dynamics-Aesthetics framework (MDA) (Hunicke et al., 2004), which emphasises causative links between game design and PX. This provides a direct theoretical link between the Functional Consequences and Psychosocial Consequences outlined in their aims.
Instrument Development Phases
All three articles take varied approaches to instrument development, but all broadly follow a three-stage process: exploratory research, scale development and testing. Due to the complexity of the analyses involved, the focus here will be on briefly comparing and contrasting methods rather than on thoroughly explaining the statistical methodologies used; references are provided for detailed explanations of the analysis methods.
Phase One: Exploratory Research
In the initial research, IJsselsteijn et al. (2007) gathered data via focus groups. The reasons for using this methodology were twofold: firstly, to create an extended list of items forming the basis of the GEQ; secondly, to provide researchers with deeper insight into the play experiences of different gamer types, e.g. frequent vs. infrequent players. In total, 12 undergraduate and graduate students participated in three focus groups.
Expert meetings were then arranged, with the aim of combining insight from the earlier theoretical explorations and focus groups into a 'concept map' of PX elements. The meetings comprised five researchers, including psychologists and experts in UX instrument development. Personal reflections on what each individual saw as salient gaming experiences were written on post-its. Results from the focus group meetings and theoretical considerations were then added and organised on a whiteboard according to their similarity and centrality. This formed the 'concept map', later refined into the seven-construct scale.
To construct the first draft of their instrument, Fang et al. used Moore and Benbasat's (1991) scale development approach. Using the tripartite model as a starting point, a team consisting of information systems research faculty and doctoral students engaged in brainstorming sessions to create items for the three constructs.
The second stage of the process involved contacting 20 professional game developers and designers via email to review the initial item pools. They were given the item pools and asked to keep, revise or drop each item. After review, changes were made, leading to an updated version of the instrument.
Abeele et al. (2020) used DeVellis's (2012) scale development method. Initial item generation took place over two studies. In study 1, a group of 31 GUR experts reviewed an initial selection of constructs and items and devised the theoretical model. In study 2, 33 GUR experts provided feedback on the model using a Q-sorting procedure, and based on this work a survey was created.
Whilst all teams conducted primary research to generate their scale items, the GEQ researchers were the only group to involve lay people, via their focus groups. Gibbs (1997) pointed out that issues with focus group methodology include less control over the resulting data and an inability to assume that individuals are expressing a definitive view. Conversely, the other two studies utilised expert review in both stages of exploratory research, with Abeele et al. (2020) using GUR experts, and Fang et al. (2010) using information systems research faculty and doctoral students in phase one, and professional game developers and designers in phase two.
Significant differences in sample size are also seen across the studies. To obtain more accurate mean values and a smaller margin of error, as large a sample as possible is preferable (Zamboni, 2019). Of the 20 prospective participants Fang et al. contacted, just six responded; although the respondents represented industry-leading firms, the low response rate could call into question the validity of the research.
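Zamboni's (2019) point is straightforward to illustrate: the margin of error of a sample mean shrinks with the square root of the sample size. A minimal Python sketch compares the sample sizes discussed here; the standard deviation of 1.5 is a hypothetical value for a 7-point Likert item, not a figure from any of the three studies.

```python
import math

def margin_of_error(std_dev: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% confidence interval for a sample mean."""
    return z * std_dev / math.sqrt(n)

# Sample sizes from the three projects: Fang et al.'s six expert reviewers,
# their 20 invitees, the PXI's 237 students and the GEQ's 705 respondents.
for n in (6, 20, 237, 705):
    print(f"n = {n:3d}: margin of error = ±{margin_of_error(1.5, n):.2f}")
```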
Phase Two: Scale Development
During this development phase each team utilised different methodologies to design and assess their respective instruments. Both the GEQ and PXI teams conducted surveys to generate data for their primary research: the PXI team gave a paper-and-pencil survey to 237 students attending a summer school, while the GEQ project recruited via the internet, obtaining 705 respondents. Both teams then performed data cleansing. Abeele et al. used a process laid out by Carpenter (2017) and Gaskin (2013), screening for suspicious response patterns and disengagement among other issues, while IJsselsteijn et al. screened for incompleteness and inconsistency.
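Although the criteria differ, the mechanics of such screening are similar across projects. A minimal pandas sketch, using hypothetical column names and toy data rather than either team's actual pipeline, might flag incomplete and 'straight-lined' (disengaged) responses like this:

```python
import pandas as pd

# Hypothetical survey data: one row per respondent, one column per Likert item.
responses = pd.DataFrame({
    "item_1": [5, 3, 4, 4, None],
    "item_2": [5, 2, 4, 5, 3],
    "item_3": [5, 4, 2, 4, 3],
})

# Flag incomplete responses (IJsselsteijn et al. screened for incompleteness).
incomplete = responses.isna().any(axis=1)

# Flag straight-lining: identical answers to every item, a common proxy
# for the 'suspicious response patterns' Abeele et al. screened for.
straight_lined = responses.nunique(axis=1) == 1

cleaned = responses[~(incomplete | straight_lined)]
print(f"Kept {len(cleaned)} of {len(responses)} responses")
```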
Similarly, Abeele et al. and IJsselsteijn et al. both use exploratory factor analysis to investigate the structure of their instruments, though Abeele et al. differ in performing reliability testing prior to undertaking factor analysis rather than in subsequent studies. Factor analysis is a method for representing the variability among observed, correlated variables in terms of a smaller number of unobserved variables, or factors (DiStefano et al., 2009), and is an important step in scale development. There are two types: exploratory (EFA) and confirmatory (CFA) (OARC UCLA, n.d.-a). EFA is helpful for initially refining down large numbers of items in a scale (DiStefano et al., 2009), whilst CFA is used to verify the psychometric properties of already developed scales (OARC UCLA, n.d.-b). Abeele et al. perform both EFA and CFA, whereas IJsselsteijn et al. explain that, despite their earlier empirical and theoretical explorations, their latent model of game experience lacked the theoretical soundness to warrant CFA. Arguably, the empirical work by IJsselsteijn et al. and subsequent researchers has since strengthened the theoretical foundations of the field; combined with current structural modelling and analysis software, this provides stronger academic grounds for Abeele et al.'s use of the technique.
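As an illustration of the exploratory step, the sketch below runs an EFA over a synthetic item-response matrix using scikit-learn. The seven-factor, varimax-rotated setup mirrors the GEQ's seven constructs but is purely illustrative; it does not reproduce either team's actual software or rotation choice.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical item-response matrix: 705 respondents x 30 scale items,
# standing in for survey data such as the GEQ team's online sample.
rng = np.random.default_rng(0)
items = rng.normal(size=(705, 30))

# Exploratory factor analysis with varimax rotation; the seven factors
# mirror the GEQ's seven constructs, but the choice is illustrative.
efa = FactorAnalysis(n_components=7, rotation="varimax")
efa.fit(items)

# Loadings show how strongly each item relates to each factor; items with
# weak or cross-loading patterns are candidates for removal when refining
# the scale.
loadings = efa.components_.T  # shape: (30 items, 7 factors)
print(np.round(loadings[:5], 2))
```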
Fang et al., by contrast, used this stage to further refine their scale using exploratory (ECS) and then confirmatory card sorting (CCS). Card sorting is a qualitative technique in which participants are provided with cards, in this case scale items, and asked to sort them in a way that makes sense to them personally. Fang et al.'s stated goals during the ECS phase were to assess the construct validity of the instrument and to identify new or ambiguous items. Because the tripartite model was adopted from a tangential field and was empirically untested for video game PX, this phase was also intended to discover new enjoyment constructs beyond the three chosen initially. The goal of the CCS stage was to assess the validity of the instrument resulting from the prior stage and to further refine the scale.
A focus group then followed in which the participants discussed their choices. A measure of classification reliability and item validity developed for the research, called 'hit rate', was then computed: the percentage of judges who placed an item into its pre-defined 'target' construct, with a higher hit rate indicating greater inter-judge agreement. Alongside hit rate, Fang et al. used Cohen's Kappa (Cohen, 1960) to measure inter-judge agreement, and the placement ratio (Moore & Benbasat, 1991) was computed to measure classification reliability and item validity.
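Both measures are simple to compute. The sketch below uses invented card-sorting placements (the construct names follow the tripartite model; the judges and items are hypothetical) to calculate hit rates directly, and a pairwise Cohen's kappa via scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical card-sorting results: for each scale item, the construct
# each judge placed it in, plus the intended 'target' construct.
target = ["affect", "affect", "cognition", "behaviour"]
judges = [
    ["affect", "affect", "cognition", "behaviour"],     # judge 1
    ["affect", "cognition", "cognition", "behaviour"],  # judge 2
    ["affect", "affect", "behaviour", "behaviour"],     # judge 3
]

# Hit rate: percentage of judges who placed each item into its target construct.
for i, construct in enumerate(target):
    hits = sum(judge[i] == construct for judge in judges)
    print(f"item {i + 1}: hit rate = {100 * hits / len(judges):.0f}%")

# Cohen's kappa measures chance-corrected agreement between two judges;
# with more than two judges it is typically computed pairwise and averaged.
print("kappa(judge 1, judge 2) =", cohen_kappa_score(judges[0], judges[1]))
```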
Phase Three: Scale Testing
The goal at this stage for all researchers was to assess scale reliability and ensure construct validity. Abeele et al. employ a battery of statistical and primary research methods over four complex studies (4 to 7):
(4) A survey was conducted to perform a multi-group invariance CFA (Fischer & Karl, 2019).
(5) They then gathered data from game evaluations and playtests in which, rather than relying on delayed recall, players completed the PXI immediately post-game; an additional CFA was performed combining data from both the delayed-recall and post-game studies.
(6) To validate model fit and verify validity, the data from studies 3, 4 and 5 were combined and a CFA carried out; convergent and discriminant validity were then assessed (Glen, 2018).
(7) Finally, an assessment of instrument criterion validity was performed using a 2x2 repeated-measures study. The researchers built two custom games and presented them with and without visual elaboration. Participants were asked to fill out alternative measurement scales as well as the PXI, and constructs from the PXI were then mapped onto the other scales to find correlations. A mediation analysis combining the datasets from studies 3, 4 and 5 was conducted to further probe the underlying instrument model and test the hypothesis that the effect of functional consequences on enjoyment is mediated by psychosocial consequences (the sketch following this list illustrates the mediation logic).
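The core of that hypothesis can be illustrated with a regression-based mediation sketch in the style of Baron and Kenny's steps; this is not Abeele et al.'s actual structural model, and the data below are synthetic, generated so that the functional-consequences effect flows mostly through the psychosocial mediator:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic scores illustrating the hypothesised chain: functional
# consequences (X) -> psychosocial consequences (M) -> enjoyment (Y).
rng = np.random.default_rng(1)
n = 500
functional = rng.normal(size=n)
psychosocial = 0.6 * functional + rng.normal(scale=0.8, size=n)
enjoyment = 0.7 * psychosocial + 0.1 * functional + rng.normal(scale=0.8, size=n)

# Step 1: total effect of X on Y.
total = sm.OLS(enjoyment, sm.add_constant(functional)).fit()

# Step 2: direct effect of X on Y when the mediator M is controlled for;
# a markedly smaller X coefficient suggests mediation by M.
X = sm.add_constant(np.column_stack([functional, psychosocial]))
direct = sm.OLS(enjoyment, X).fit()

print("total effect of X:", round(total.params[1], 2))
print("direct effect of X given M:", round(direct.params[1], 2))
```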
Contrastingly, to test the fourth and final iteration of their instrument, Fang et al. conducted an email survey of students at a U.S. Midwest university, obtaining 307 participant responses. For the formal data analysis they used Cronbach's alpha (Cronbach, 1951), a popular measure of internal consistency also used by the PXI and GEQ teams in earlier phases, and conducted a factor analysis in order to eliminate further items.
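Cronbach's alpha has a simple closed form: for k items, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of the total score). A short self-contained sketch, with synthetic data standing in for the 307 survey responses:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha (Cronbach, 1951) for a respondents x items matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical data: 307 respondents answering a 5-item subscale, generated
# so that answers correlate and the items 'hang together'.
rng = np.random.default_rng(2)
trait = rng.normal(size=(307, 1))
scores = trait + rng.normal(scale=0.7, size=(307, 5))
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values near .70+ are conventionally acceptable
```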
IJsselsteijn et al. described their explorations of the GEQ in this testing phase as 'preliminary' and part of ongoing planned research. They did, however, perform basic sensitivity and discriminant validity checks on the GEQ scales using data gathered in previous research, calculating average scores and response ranges and reporting bivariate correlations.
Conclusion
Compared to the GEQ and MEGP teams, Abeele et al. clearly performed considerably more, and more complex, statistical analyses. It is debatable whether this rigour stems from the use of ever more sophisticated software and increasingly powerful computing, or from the maturation of the GUR field over the last decade; it is likely a combination of both. They do, however, state that scale development is an incremental and evolving process, and they suggest a number of ways in which the instrument can be further tested. Given the relative recency of the research, it remains to be seen whether the PXI will see widespread adoption similar to the GEQ.
Additionally, in the time since its creation the GEQ has come under direct scrutiny due to its popularity as a measurement tool. Law et al. (2018), in a systematic analysis of its employment in contemporary GUR, found no evidence for the validity of the 7-factor structure IJsselsteijn et al. (2007) postulated. Johnson et al. (2018) concur with these findings, in that they could not find "clear support for the proposed factor structure of the GEQ". This goes some way to show that even ample funding and empirical research cannot always be taken as a guarantee of scale validity.
Fang et al. (2013), on the other hand, later discarded the tripartite model and converged with the GUR community on the use of flow as a theoretical basis for instrument development. Although this later research differs from Fang et al. (2010), the tripartite model has largely been ignored in contemporary GUR in favour of more bespoke methodologies, as evidenced by Abeele et al. (2020). And whilst the scope of their more contemporary work was narrowly centred on what has been found to be just one dimension of game enjoyment (flow), the original research (Fang et al., 2010) has nonetheless contributed to the corpus of GUR-related literature (Caroux et al., 2015; Nah et al., 2014; Perski et al., 2016).
References
Aarseth, E., & Grabarczyk, P. (2018). An Ontological Meta-Model for Game Research. DiGRA ’18 - Proceedings of the 2018 DiGRA International Conference: The Game Is the Message. https://pure.itu.dk/portal/files/85540747/4.19_An_Ontological_Meta_Model_for_Game_Resea.pdf
Abeele, V. V., Spiel, K., Nacke, L., Johnson, D., & Gerling, K. (2020). Development and validation of the player experience inventory: A scale to measure player experiences at the level of functional and psychosocial consequences. International Journal of Human-Computer Studies, 135, 102370. https://doi.org/10.1016/j.ijhcs.2019.102370
Brown, E., & Cairns, P. (2004). A grounded investigation of game immersion. Extended Abstracts of the 2004 Conference on Human Factors and Computing Systems - CHI ’04. https://doi.org/10.1145/985921.986048
Caroux, L., Isbister, K., le Bigot, L., & Vibert, N. (2015). Player–video game interaction: A systematic review of current concepts. Computers in Human Behavior, 48, 366–381. https://doi.org/10.1016/j.chb.2015.01.066
Carpenter, S. (2017). Ten Steps in Scale Development and Reporting: A Guide for Researchers. Communication Methods and Measures, 12(1), 25–44. https://doi.org/10.1080/19312458.2017.1396583
Carter, M., Downs, J., Nansen, B., Harrop, M., & Gibbs, M. (2014). Paradigms of games research in HCI: A Review of 10 Years of Research at CHI. Proceedings of the First ACM SIGCHI Annual Symposium on Computer-Human Interaction in Play. https://doi.org/10.1145/2658537.2658708
Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1), 37–46. https://doi.org/10.1177/001316446002000104
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334. https://doi.org/10.1007/bf02310555
Csikszentmihalyi, M. (1990). Flow: The Psychology of Optimal Experience (1st ed.). Harper & Row.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. Perspectives in Social Psychology.
DeVellis, R. F. (2012). Scale development: Theory and applications. SAGE Publications.
DiStefano, C., Zhu, M., & Mîndrilã, D. (2009). Understanding and Using Factor Scores: Considerations for the Applied Researcher. Practical Assessment, Research, and Evaluation, 14(14). https://doi.org/10.7275/da8t-4g52
Drachen, A., Mirza-Babaei, P., & Nacke, L. (2018). Games User Research. Oxford University Press.
Factor Analysis: Varimax Rotation. (2017). The SAGE Encyclopedia of Communication Research Methods. https://doi.org/10.4135/9781483381411.n191
Fang, X., Chan, S., Brzezinski, J., & Nair, C. (2010). Development of an Instrument to Measure Enjoyment of Computer Game Play. International Journal of Human-Computer Interaction, 26(9), 868–886. https://doi.org/10.1080/10447318.2010.496337
Fang, X., Zhang, J., & Chan, S. S. (2013). Development of an Instrument for Studying Flow in Computer Game Play. International Journal of Human-Computer Interaction, 29(7), 456–470. https://doi.org/10.1080/10447318.2012.715991
Fischer, R., & Karl, J. A. (2019). A Primer to (Cross-Cultural) Multi-Group Invariance Testing Possibilities in R. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.01507
Gaskin, J. [James Gaskin]. (2013, May 2). SEM Series Part 2: Data Screening [Video]. YouTube. https://www.youtube.com/watch?v=1KuM5e0aFgU
Gibbs, A. (1997). Social Research Update 19: Focus Groups. Social Research Update - University of Surrey. Retrieved 10 January 2022, from https://sru.soc.surrey.ac.uk/SRU19.html
Glen, S. (2018, August 14). Convergent Validity and Discriminant Validity: Definition, Examples. Statistics How To. https://www.statisticshowto.com/convergent-validity/
Gutman, J. (1982). A Means-End Chain Model Based on Consumer Categorization Processes. Journal of Marketing, 46(2), 60. https://doi.org/10.2307/3203341
Hunicke, R., LeBlanc, M., & Zubek, R. (2004). MDA: A Formal Approach to Game Design and Game Research. Proceedings of the AAAI Workshop on Challenges in Game AI, 4(1), 1722. https://users.cs.northwestern.edu/~hunicke/MDA.pdf
IJsselsteijn, W. A., de Kort, Y. A. W., & Poels, K. (2007). Game Experience Questionnaire: development of a self-report measure to assess the psychological impact of digital games. Technische Universiteit Eindhoven. https://research.tue.nl/en/publications/d33-game-experience-questionnaire-development-of-a-self-report-me
Jennett, C., Cox, A. L., Cairns, P., Dhoparee, S., Epps, A., Tijs, T., & Walton, A. (2008). Measuring and defining the experience of immersion in games. International Journal of Human-Computer Studies, 66(9), 641–661. https://doi.org/10.1016/j.ijhcs.2008.04.004
Johnson, D., Gardner, M. J., & Perry, R. (2018). Validation of two game experience scales: The Player Experience of Need Satisfaction (PENS) and Game Experience Questionnaire (GEQ). International Journal of Human-Computer Studies, 118, 38–46. https://doi.org/10.1016/j.ijhcs.2018.05.003
Lankoski, P., & Holopainen, J. (2017). Game Design Research. Amsterdam University Press.
Law, E. L. C., Brühlmann, F., & Mekler, E. D. (2018). Systematic Review and Validation of the Game Experience Questionnaire (GEQ) - Implications for Citation and Reporting Practice. Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. https://doi.org/10.1145/3242671.3242683
Moore, G. C., & Benbasat, I. (1991). Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation. Information Systems Research, 2(3), 192–222. https://doi.org/10.1287/isre.2.3.192
Nabi, R. L., & Krcmar, M. (2004). Conceptualizing Media Enjoyment as Attitude: Implications for Mass Media Effects Research. Communication Theory, 14(4), 288–310. https://doi.org/10.1111/j.1468-2885.2004.tb00316.x
Nah, F. F. H., Eschenbrenner, B., Zeng, Q., Telaprolu, V. R., & Sepehr, S. (2014). Flow in gaming: literature synthesis and framework development. International Journal of Information Systems and Management, 1(1/2), 83. https://doi.org/10.1504/ijisam.2014.062288
Newsom, J. T. (2017). Exploratory and Confirmatory Factor Analysis [Slides]. Portland State University. http://web.pdx.edu/~newsomj/pmclass/EFA%20and%20CFA.pdf
Office of Advanced Research Computing UCLA. (n.d.-a). A Practical Introduction to Factor Analysis. Office of Advanced Research Computing Statistical Methods and Data Analytics. Retrieved 11 January 2022, from https://stats.oarc.ucla.edu/spss/seminars/introduction-to-factor-analysis/
Office of Advanced Research Computing UCLA. (n.d.-b). Confirmatory Factor Analysis (CFA) in R with lavaan. Office of Advanced Research Computing UCLA Statistical Methods and Data Analytics. Retrieved 11 January 2022, from https://stats.oarc.ucla.edu/r/seminars/rcfa/
Perski, O., Blandford, A., West, R., & Michie, S. (2016). Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Translational Behavioral Medicine, 7(2), 254–267. https://doi.org/10.1007/s13142-016-0453-1
Ravaja, N. (2009). The Psychophysiology of Digital Gaming: The Effect of a Non Co-Located Opponent. Media Psychology, 12(3), 268–294. https://doi.org/10.1080/15213260903052240
Salminen, M., & Ravaja, N. (2007). Oscillatory Brain Responses Evoked by Video Game Events: The Case of Super Monkey Ball 2. CyberPsychology & Behavior, 10(3), 330–338. https://doi.org/10.1089/cpb.2006.9947
Takatalo, J., Häkkinen, J., Kaistinen, J., & Nyman, G. (2009). Presence, Involvement, and Flow in Digital Games. Evaluating User Experience in Games, 23–46. https://doi.org/10.1007/978-1-84882-963-3_3
Zamboni, J. (2019, March 2). The Advantages of a Large Sample Size. Sciencing. https://sciencing.com/advantages-large-sample-size-7210190.html
Zillmann, D. (1988). Mood Management Through Communication Choices. American Behavioral Scientist, 31(3), 327–340. https://doi.org/10.1177/000276488031003005
Zillmann, D. (2015). Mood Management: Using Entertainment to the Full Advantage. Communication, Social Cognition, and Affect (PLE: Emotion), 163–188. https://doi.org/10.4324/9781315743974-16