This article has been published at the Dagstuhl Seminar 17471 "Artificial and Computational Intelligence in Games: AI-Driven Game Design". The original publication, along with its bibtex entry and other information can be found here.

Mixed-Media Game AI
Joint work by:
Antonios Liapis, Elisabeth André, Sander C.J. Bakkes, Rafael Bidarra, Steve Dahlskog, Mirjam P. Eladhari, Ana Paiva, Mike Preuß, Gillian Smith, Anne Sullivan, Tommy Thompson, David Thue, Georgios N. Yannakakis, R. Michael Young

Over the last few decades, digital technologies have moved beyond the personal computer (PC) into cloud computing, ubiquitous computing, intelligent robots and smart devices. From wearable technologies to remote-controlled household items, and from sensors for crowd control to personal drones, a growing range of technologies provides new opportunities for games and artificial intelligence. While artificial intelligence (AI) is already a big part of the Internet of Things, raising concerns in terms of ethics and politics [14], research in game AI has remained relatively confined to traditional computer and video games. Relevant work on wearable technologies as game controllers [13], mixed- or virtual-reality rendering [8], technology-enhanced play in playgrounds [5] or board games [6], and social robots for games [3] has not taken full advantage of AI for controlling or mediating the experience. Using the term mixed-media to refer broadly to any medium, digital or otherwise, beyond the game loop within a PC or a game-specific database, this working group attempted to map out the broad topic of mixed-media in terms of its applications for game AI.

AI could assist games in taking advantage of mixed-media in two principal ways: by using input from non-game sources within the game, and by moving elements of play or gameplay outcomes onto non-PC outputs. The usefulness of non-game information as input to gameplay lies primarily in the identification of context. Context can be personal (e.g. information relevant to this particular player) or broader and anonymous (e.g. the number of people in the player's vicinity, or trending topics on Twitter). Context can come from the actual environment of a player via sensors (e.g. temperature sensors on a player's mobile phone, or surveillance logs in public spaces such as museums), or from a variant game controller (e.g. speech or face detection as an explicit part of gameplay). It can also come from social relationships (e.g. based on social media profiles, real-life proximity, or a history of social interaction), from game histories (e.g. past gameplay habits in other games), from cultural histories (e.g. players' demographic data linked to cultural heritage databases), or from temporal context (e.g. the current time of day or date, or the time passed since the user last interacted with the game).
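As a minimal sketch of the input side, the Python snippet below merges such heterogeneous context signals into a single snapshot that a game's AI could consume. The ContextAggregator, PlayerContext, and all registered sources are hypothetical illustrations, not a description of any published system.

```python
# A hypothetical sketch: merging heterogeneous context signals into a
# single snapshot that an AI director could consume. All source names
# and readings below are invented stubs, not real sensor APIs.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Callable, Dict


@dataclass
class PlayerContext:
    """A snapshot of non-game context at a given moment."""
    signals: Dict[str, Any] = field(default_factory=dict)


class ContextAggregator:
    """Polls registered context sources and merges their readings."""

    def __init__(self) -> None:
        self._sources: Dict[str, Callable[[], Any]] = {}

    def register(self, name: str, read: Callable[[], Any]) -> None:
        self._sources[name] = read

    def snapshot(self) -> PlayerContext:
        ctx = PlayerContext()
        for name, read in self._sources.items():
            try:
                ctx.signals[name] = read()
            except Exception:
                # Real-world sensors fail; a dead source simply
                # contributes no signal to this snapshot.
                ctx.signals[name] = None
        return ctx


# Example wiring with stubbed sources:
aggregator = ContextAggregator()
aggregator.register("time_of_day", lambda: datetime.now().hour)
aggregator.register("ambient_temperature_c", lambda: 21.5)  # stub for a phone sensor
aggregator.register("nearby_players", lambda: 3)            # stub for proximity data
print(aggregator.snapshot().signals)
```

Treating each source as a pluggable callable isolates the game loop from sensor failures, which are routine outside the lab: a dead source simply yields no signal for that snapshot.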

On the other hand, the output of such applications (playful or not) can also be delivered in non-traditional media beyond the digital screens of a PC or mobile device. Closest to traditional digital outputs, mixed- or virtual-reality devices could be considered, along with environmental projections (such as large screens or projections on different parts of a building, in the case of multiplayer games played in a shared environment). In a similar (and familiar) direction, the output or intermediate states of a long-lasting game can be shared on social media, potentially soliciting other users' feedback as additional input (to provide even more context, as discussed above). A game's final output, such as a drawing in a collaborative drawing game [11] or an AI-assisted drawing tool [4], could be printed on paper or fabric [1], realized via 3D printing [9, 10], or sonified into a musical piece [12]. More ambitiously, the intermediate states of such a game could be used to control robots, wearable actuators (worn by players), or smart home devices. Obviously, the further the output strays from traditional game output, the more challenging the design problem becomes: the application must remain a seamless, playful experience, its output must be understood as such, and, when moving into the real world, it must remain safe to use and respect players' privacy.
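As a rough sketch of the output side, and under the same caveat that every name here is invented, the snippet below routes a single game artifact to pluggable media backends; real backends would wrap printing, social-media, or sonification services.

```python
# A hypothetical sketch: routing one game artifact to several output
# media. The backends are print-statement stubs standing in for real
# printing, social-media, or sonification services.
from typing import Callable, Dict, List


def post_to_social(artifact: str) -> None:
    print(f"[social] shared: {artifact}")


def queue_fabric_print(artifact: str) -> None:
    print(f"[fabric] queued for embroidery: {artifact}")


def sonify(artifact: str) -> None:
    print(f"[audio] rendered as a short musical phrase: {artifact}")


OUTPUT_BACKENDS: Dict[str, Callable[[str], None]] = {
    "social": post_to_social,
    "fabric": queue_fabric_print,
    "audio": sonify,
}


def deliver(artifact: str, channels: List[str]) -> None:
    """Send the same artifact to every requested medium that is available."""
    for channel in channels:
        backend = OUTPUT_BACKENDS.get(channel)
        if backend is not None:
            backend(artifact)


deliver("player_drawing_042", ["social", "audio"])
```

Keeping the backends behind a common dispatch table means a game can add or drop a medium without touching its core loop, which matters when outputs range from a wall projection to a 3D printer.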

Following an initial mapping of the possibility space for mixed-media game AI, including possible audiences, purposes and challenges, the working group focused on three specific use cases. The first use case focused on textile input to physical output for MEND, a platform where people contribute, one person at a time, to a communal art piece (projected on a wall) by scanning embroidered physical objects. The second use case focused on physical input to virtual output, for a game designed around a sensor able to detect laughter in groups [7] as a mediator for when a game session is won and by whom (a minimal sketch of such a mediator follows below). The final use case problematized the topic of input more broadly, attempting to identify context within a player's game data; the main question revolved around whether there are signals or log data that can be assumed to be independent of the game context [2] yet still able to capture the player's context in terms of experience. The broad nature of mixed-media game AI was thus mapped out, and a small part of the design and problem space it offers was explored.
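To make the second use case concrete, here is a minimal sketch of how a group laughter detector such as [7] could mediate who wins a round. The per-player intensity values and the threshold are invented for illustration; any real deployment would need calibration against the actual sensor.

```python
# A hypothetical sketch of a laughter-mediated win condition, assuming a
# group laughter detector (cf. [7]) that yields a per-player intensity
# in [0, 1] over one round. The threshold is invented and would need
# tuning against the actual sensor.
from typing import Dict, Optional

LAUGHTER_WIN_THRESHOLD = 0.6  # illustrative value only


def round_winner(laughter_by_player: Dict[str, float]) -> Optional[str]:
    """The round is won by whoever elicited the most group laughter,
    but only if laughter actually exceeded the threshold."""
    if not laughter_by_player:
        return None
    player, score = max(laughter_by_player.items(), key=lambda kv: kv[1])
    return player if score >= LAUGHTER_WIN_THRESHOLD else None


# Stubbed readings for one round:
print(round_winner({"alice": 0.72, "bob": 0.45, "carol": 0.31}))  # -> alice
```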

References

[1] Lea Albaugh, April Grow, Chenxi Liu, James McCann, Jen Mankoff, and Gillian Smith. Threadsteading: Playful interaction for textile fabrication devices. Installation in Interactivity at the ACM Conference on Human Factors in Computing Systems (CHI), 2016.

[2] Elizabeth Camilleri, Georgios N. Yannakakis, and Antonios Liapis. Towards general models of player affect. In Proceedings of the International Conference on Affective Computing and Intelligent Interaction, 2017.

[3] Filipa Correia, Patrícia Alves-Oliveira, Nuno Maia, Tiago Ribeiro, Sofia Petisca, Francisco S. Melo, and Ana Paiva. Just follow the suit! Trust in human-robot interactions during card game playing. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pages 507–512. IEEE, 2016.

[4] Nicholas Davis, Yanna Popova, Ivan Sysoev, Chih-Pin Hsiao, Dingtian Zhang, and Brian Magerko. Building artistic computer colleagues with an enactive model of creativity. In Proceedings of the International Conference on Computational Creativity, 2014.

[5] Die Gute Fabrik. Johann Sebastian Joust, 2014.

[6] Fantasy Flight Games. Mansions of Madness, 2nd ed., 2016.

[7] Simon Flutura, Johannes Wagner, Florian Lingenfelser, Andreas Seiderer, and Elisabeth André. Laughter detection in the wild: Demonstrating a tool for mobile social signal processing and visualization. In Proceedings of the ACM International Conference on Multimodal Interaction, 2016.

[8] Alex Hern. Will 2017 be the year virtual reality gets real? The Guardian, 2016. Online; last accessed 9 January 2018.

[9] Britton Horn, Gillian Smith, Rania Masri, and Janos Stone. Visual information vases: Towards a framework for transmedia creative inspiration. In Proceedings of the International Conference on Computational Creativity, 2015.

[10] Joel Lehman, Sebastian Risi, and Jeff Clune. Creative generation of 3D objects with deep learning and innovation engines. In Proceedings of the International Conference on Computational Creativity, 2016.

[11] Antonios Liapis, Amy K. Hoover, Georgios N. Yannakakis, Constantine Alexopoulos, and Evangelia V. Dimaraki. Motivating visual interpretations in Iconoscope: Designing a game for fostering creativity. In Proceedings of the Foundations of Digital Games Conference, 2015.

[12] Marco Scirea, Gabriella A. B. Barros, Noor Shaker, and Julian Togelius. SMUG: Scientific Music Generator. In Proceedings of the International Conference on Computational Creativity, 2015.

[13] Joshua Tanenbaum, Karen Tanenbaum, Katherine Isbister, Kaho Abe, Anne Sullivan, and Luigi Anzivino. Costumes and wearables as game controllers. In Proceedings of the International Conference on Tangible, Embedded, and Embodied Interaction, 2015.

[14] The Future of Life Institute. An open letter to the United Nations Convention on Certain Conventional Weapons. 2017. Online; last accessed 9 January 2018.
