This article has been published at the Dagstuhl Seminar 24261 "Computational Creativity for Game Development". The original publication, along with its bibtex entry and other information, will be made available soon.
The last five years were a watershed moment in bringing AI technologies to the forefront in applications, web browsers, dedicated (paid) services, but also in the public discourse, courtrooms, and creators’ circles. This surge of research, financial, and collective interest is triggered by important AI advances that manage to produce artifacts in domains we would consider creative [1] (such as text, images, audio, movies, and code) via data-hungry AI models trained on data which are available on the World Wide Web but which are not intended as training data. The dubious ethics of such practices and the closed-source nature of such trained models leave AI academics and human creators upset and concerned [2, 3]. Beyond obvious ethical issues of exploitation and copyright infringement, we identify a looming threat of data scarcity. If current large AI models use the majority—if not the entirety—of the World Wide Web, where would new training data come from? While new human data currently takes the form of labels and annotations by crowdworkers in the Global South [4], it is not unrealistic to envisage a near future where exploited crowdworkers are required to produce creative pieces of art to train AI models—without an intrinsic motivation to create.
Moreover, the availability of seemingly cogent AI outputs or the low-stakes interaction with the AI (without needing technical expertise or cumbersome software libraries) has changed human perspectives and processes in creative domains, education, and everyday life. Indicatively, education has so far been realized via interactions between learner and educator (in a top-down fashion) and via peer-to-peer collaboration. AI automation tends to isolate a learner, placing them in an adversarial relationship with the educator who is expected to act as a discriminator of AI-generated or AI-curated reports. Such activities used to be social, communal efforts where sharing opinions and perspectives was critical for satisfying the highest-level human needs [5] such as learning, creativity, or self-actualization. We argue that human-human interaction, either via a peer-based bottom-up ideation process or via some expertise gap (such as learner and educator or client and commissioned artist) is threatened by trivialized AI-human interactions. These interactions are not only trivialized because the AI output may seem novel initially but loses its novelty over time [6], or because of the potential for factual errors [7]. We argue that such interactions are trivialized precisely due to the speed of AI responses. In line with Kahneman’s distinction between fast and slow thinking [8], instantaneous AI outputs with a complete artifact (e.g. an art piece) hinder human slow thinking and block the potential for iteration, reframing [9], or mediated consensus creation [10]. Creative thinking not only benefits from interventions by other humans and external (even random) stimuli [11], but also consists of a slow, introspective, autotelic process of self-doubt, frustration [12], trial and error, reflection [13] and Eureka moments [14].
Through discussion, we identified three cases where a communal approach (with meaningful human-human interaction) would have a strong impact: (a) education strategies that counter tendencies of generative AI; (b) AI corporate strategies that empower their workers; (c) art critique of the aesthetics of generative AI. Below, we report the high-level outcomes of the discussions for these three use-cases.
An obvious challenge for educators in the current (and near-future) age of ChatGPT and similar AI solutions [15] is the writing of student reports in an automated or semi-automated way. Automated processes for detecting AI-generated texts remain underwhelming [16], and the potential for false positives in graded work makes such solutions unpalatable. Therefore, more fundamental changes are needed towards modern pedagogies. Importantly, we identify that demonizing AI and labeling it as a (blanket) taboo would likely have the opposite outcome. Improving AI literacy, especially at a younger age, would instead be required in order for learners to understand the strengths, weaknesses, and caveats of AI use in their coursework but also in their everyday life. Ideally, such AI literacy would come from inductive teaching that showcases to the learner firsthand how AI can fail at tasks that demand creativity, knowledge synthesis, and critical thinking.
At a tertiary education level, a likely strategy to counter reliance on generative AI requires pivoting from assessment based on single-author reports towards more practical projects that involve teamwork, as well as introducing peer assessment as a (non-graded) activity. This is not applicable to all disciplines, admittedly, but would lend itself well to most game studies and game development courses—except perhaps foundational courses on programming or theory. As a more practical use-case for game development education, we formalized an exercise that solicits the self-realization of biases and limitations of generative AI as well as the benefits of collaboration between human experts in different fields. The exercise takes the form and principles of a game jam [17], an intense game creation process where multiple teams compete to make the best game in a short timeframe—often a couple of days. In this exercise, a few teams would be formed with one human artist and one human programmer. All other teams would either consist of one artist, who would be invited to use code generation AI models [18] (while also accessing all online resources and tutorials available to everyone), or consist of one programmer, who would be invited to use generated art for their game. For the sake of implementability, the proposed exercise overlooks many other vital roles in game development such as writer, game designer, or musician. We expect that the exercise would highlight (a) the limited controllability and output novelty of generative AI and (b) the unique ideas emerging through friction and negotiation with a human colleague.
Unlike the education use-case, this working group adopted a more tech-optimistic view of current (and likely future) AI technologies. Assuming that AI automation can reduce friction and help collaboration between different sectors of a business, AI automation could be set up in human-like ways. Such a setup would free human resources, allowing workers to pursue fewer hours of intellectual work rather than many hours of menial work. Moreover, if most tasks could be automated to a satisfactory—even if not human-competitive—level, workers could move freely within the structure and take up different tasks while acting as a human-in-the-loop for the AI handling that task. This flexibility would lower the chances of burnout and, coupled with fewer hours that consist only of meaningful (and ostensibly rewarding) work, would lead to a happier workforce. Importantly, the envisaged solution would require a different work structure with more empowered workers with incentives to perform well, such as partnerships and company stocks. It is worth noting that the envisioned solution overlooks a number of (more pressing) concerns, indicatively that (a) menial tasks that could be automated would lead to jobs lost for people with these exact skills, and (b) current AI "automation" often involves humans-in-the-loop or training data from exploited workers [4] who could remain overlooked (and unrecognized) in the envisioned work structure. Therefore, for such company practices to be sustainable and ethical, we presuppose a workforce educated in AI and digital skills, as well as legislation and/or new standards that leverage AI without exploitative practices.
Taking a different view on AI literacy to the two previous use-cases, this workgroup identified art appreciation as a way to value, critique, and review generative AI models in terms of their output. Art practice is founded on such traditions, from individual letter correspondence between artists [19] and gallery visits by peers [20] to discussions within artistic brotherhoods in the 1800s [21] and Discord servers in the 2020s [22]. Art historians, similarly, study the trends of an art movement and the deliberate additions by an individual creator within that context. Moving into the realm of AI models, the critique here would assess the workings of the models and their internal biases—rather than deliberate brush strokes on one painting.
As with different creative domains (writing, painting, sculpture, etc.), a common language is likely needed to review AI models and potentially different types of AI output such as generative text, art, video, music, etc. We envision AI model reviews to focus more on use-cases where the model performs well, along with recommendations for domains, applications and aesthetics that the AI model is suited for. It is essential that such a vocabulary is not imposed from the top down by computer scientists (or worse, the corporate shareholders attempting to hype their product). Instead, this vocabulary and pertinent aesthetics should emerge from the bottom up through cultural stakeholders. These stakeholders range from amateurs experimenting with the new tools—some of this collaborative meaning-making is already taking place on Discord servers [22]—to creatives and/or art experts such as gallery curators. Reaching a consensus among these diverse cultural groups will likely not be immediate or easy, but we argue that such a vocabulary will inevitably coalesce—if precedents in traditional art movements [23] are any indication. We envision that such critique could become normalized through modern dissemination practices such as zines [24] or even exhibitions. Admittedly, an ecosystem of human reviewers of AI models presupposes a level of AI literacy (and perhaps a tech-optimism) that current creative circles and art critics lack. On the other hand, we envision that a normalization of AI model reviews would enhance AI literacy (under specific perspectives and use-cases) within the art world and the general public.
New methods of human-AI interaction and the emergence of "AI companies" necessitate a review of current practices within our everyday lives, and how those might change in the near- or mid-term. The three working groups described above tackled very different issues of everyday life (education, business, and art) through different positions in the tech-optimism versus tech-pessimism spectrum. However, all working groups identified the crucial role that AI literacy (and by association, critical thinking skills regarding AI process, output, and capacities) will play for everyone moving forward. Moreover, all working groups emphasized the need for bottom-up movements to empower human stakeholders in a meaningful way that fosters community, rather than in an adversarial (e.g. students versus educators, or AI evangelists versus traditional artists) or exploitative fashion. The premise, topics, and outcomes of the working groups extend beyond game research or game development. However, games as a medium, gamification as a set of design patterns [25], and play as an activity [26, 27] could facilitate both AI literacy (e.g. [28]) and community-building (e.g. [29]). While AI is likely to impact our everyday lives and society in foreseeable and unforeseeable ways, we hold hope that bottom-up movements and a communal effort will rise up to address the new challenges.
[1] S. Colton and G. A. Wiggins, "Computational creativity: the final frontier?" in Proceedings of the 20th European Conference on Artificial Intelligence, 2012.
[2] J. Togelius and G. N. Yannakakis, "Choose your weapon: Survival strategies for depressed AI academics [point of view]," Proceedings of the IEEE, vol. 112, no. 1, pp. 4–11, 2024.
[3] C. E. Lamb and D. G. Brown, "Should we have seen the coming storm? Transformers, society, and CC," in Proceedings of the International Conference on Computational Creativity, 2023.
[4] A. Williams, M. Miceli, and T. Gebru, "The exploited labor behind artificial intelligence," https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence, 2022, accessed 21 August 2024.
[5] A. Maslow, "A theory of human motivation," Psychological Review, vol. 50, no. 4, 1943.
[6] E. Zhou and D. Lee, "Generative artificial intelligence, human creativity, and art," PNAS Nexus, vol. 3, 2024.
[7] M. Karpinska and M. Iyyer, "Large language models effectively leverage document-level context for literary translation, but critical errors persist," in Proceedings of the Machine Translation Conference, 2023.
[8] D. Kahneman, Thinking, Fast and Slow. Farrar, Straus and Giroux, 2013.
[9] T. Scaltsas and C. Alexopoulos, "Creating creativity through emotive thinking," in Proceedings of the World Congress of Philosophy, 2013.
[10] J. R. Hollenbeck, "The role of editing in knowledge development: Consensus shifting and consensus creation," in Opening the black box of editorship. Springer, 2008, pp. 16–26.
[11] M. Beaney, Imagination and creativity. Open University Milton Keynes, UK, 2005.
[12] S. Savvani, "Emotions and challenges during game creation: Evidence from the global game jam," in Proceedings of the 14th European conference on games based learning, 2020, pp. 507–514.
[13] D. Boud, R. Keogh, and D. Walker, "Promoting reflection in learning: A model," in Reflection: Turning experience into learning. London: Routledge in association with The Open University, 1996, pp. 32–56.
[14] M. Bilalić, M. Graf, N. Vaci, and A. H. Danek, "The temporal dynamics of insight problem solving–restructuring might not always be sudden," Thinking & Reasoning, vol. 27, no. 1, pp. 1–37, 2021.
[15] T. Fütterer, C. Fischer, A. Alekseeva, X. Chen, T. Tate, M. Warschauer, and P. Gerjets, "ChatGPT in education: global reactions to AI innovations," Scientific Reports, vol. 13, no. 1, 2023.
[16] C. Chaka, "Detecting AI content in responses generated by ChatGPT, YouChat, and Chatsonic: The case of five AI content detection tools," Journal of Applied Learning and Teaching, vol. 6, no. 2, 2023.
[17] A. Kultima, "Defining game jam," in Proceedings of Foundations of Digital Games Conference, 2015.
[18] S. Imai, "Is GitHub copilot a substitute for human pair-programming? An empirical study," in Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings, 2022.
[19] P. Grant, The Letters of Vincent van Gogh: A Critical Study. Athabasca University Press, 2014.
[20] A. Antoniou, I. Lykourentzou, A. Liapis, D. Nikolou, and M. Konstantinopoulou, ""What Artists Want": Elicitation of artist requirements to feed the design on a new collaboration platform for creative work," 2021. [Online]. Available: https://arxiv.org/abs/2110.02930
[21] L. Morowitz and W. Vaughan, Artistic Brotherhoods in the Nineteenth Century. Routledge, 2000.
[22] J. McCormack, M. T. Llano Rodriguez, S. Krol, and N. Rajcic, "No longer trending on Artstation: Prompt analysis of generative AI art," in Proceedings of the International Conference on Evolutionary and Biologically Inspired Music, Sound, Art and Design, 2024.
[23] H. Ball, "Dada manifesto," 1916.
[24] S. E. Thomas, "Value and validity of art zines as an art form," Art Documentation: Journal of the Art Libraries Society of North America, vol. 28, no. 2, pp. 27–38, 2009.
[25] S. Deterding, D. Dixon, R. Khaled, and L. Nacke, "From game design elements to gamefulness: defining "gamification"," in Proceedings of the International Academic MindTrek Conference, 2011.
[26] A. Liapis, C. Guckelsberger, J. Zhu, C. Harteveld, S. Kriglstein, A. Denisova, J. Gow, and M. Preuss, "Designing for playfulness in human-AI authoring tools," in Proceedings of the FDG workshop on Human-AI Interaction Through Play, 2023.
[27] J. Zhu, G. Chanel, M. Cook, A. Denisova, C. Harteveld, and M. Preuss, "Human-AI collaboration through play," in Human-Game AI Interaction (Dagstuhl Seminar 22251), D. Ashlock, S. Maghsudi, D. P. Liebana, P. Spronck, and M. Eberhardinger, Eds. Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022.
[28] M. Zammit, I. Voulgari, A. Liapis, and G. N. Yannakakis, "The road to AI literacy education: From pedagogical needs to tangible game design," in Proceedings of the European Conference on Games Based Learning, 2021.
[29] A. Kultima, K. Alha, and T. Nummenmaa, "Building Finnish game jam community through positive social facilitation," in Proceedings of the International Academic Mindtrek Conference, 2016.