Multiple Types of AI and Their Performance in Video Games

Authors

  • Iulian PRĂJESCU, Babeș-Bolyai University, Faculty of Mathematics and Computer Science, Cluj-Napoca, Romania
  • Alina Delia CĂLIN, Babeș-Bolyai University, Faculty of Mathematics and Computer Science, Cluj-Napoca, Romania. Email address: alina.calin@ubbcluj.ro. ORCID: https://orcid.org/0000-0001-7363-4934

DOI:

https://doi.org/10.24193/subbi.2022.1.02

Keywords:

racing game, PPO, GAIL, behavioral cloning, AI in games.

Abstract

In this article, we present a comparative study of Artificial Intelligence training methods in the context of a racing video game. The algorithms Proximal Policy Optimization (PPO), Generative Adversarial Imitation Learning (GAIL), and Behavioral Cloning (BC), available in the Machine Learning Agents (ML-Agents) toolkit, were used in several scenarios. We measured their learning capability and performance in terms of speed, correct level traversal, and number of training steps required, and we explored ways to improve their performance. These algorithms prove to be suitable for racing games and are highly accessible through the ML-Agents toolkit.
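
For reference, the clipped surrogate objective optimized by PPO, as introduced by Schulman et al. (2017, listed in the references below), is reproduced here for the reader's convenience; it is a general statement of the method and not a detail specific to the configuration used in this study:

$$
L^{\mathrm{CLIP}}(\theta) = \hat{\mathbb{E}}_t\!\left[\min\!\left(r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\!\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right],
\qquad
r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
$$

where $\hat{A}_t$ is an estimate of the advantage at timestep $t$ and $\epsilon$ is the clipping parameter that limits how far the updated policy may move from the old one.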

Received by the editors: 23 September 2021.

2010 Mathematics Subject Classification. 91A10, 68T05.

1998 CR Categories and Descriptors. I.2.1 [Artificial intelligence]: Applications and Expert Systems – Games; K.8.0 [Personal computing]: General – Gaming.

References

Berndt, C., Watson, I., and Guesgen, H. Oasis: an open ai standard interface specification to support reasoning, representation and learning in computer games. In IJCAI-05 Workshop on Reasoning, Representation, and Learning in Computer Games (2005), Citeseer, pp. 19–24.

Bhattacharyya, R., Wulfe, B., Phillips, D., Kuefler, A., Morton, J., Senanayake, R., and Kochenderfer, M. Modeling human driving behavior through generative adversarial imitation learning. arXiv preprint arXiv:2006.06412 (2020).

Bojarski, M., Del Testa, D., Dworakowski, D., Firner, B., Flepp, B., Goyal, P., Jackel, L. D., Monfort, M., Muller, U., Zhang, J., et al. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316 (2016).

Fan, X., Wu, J., and Tian, L. A review of artificial intelligence for games. Artificial Intelligence in China (2020), 298–303.

Giusti, A., Guzzi, J., Cireșan, D. C., He, F.-L., Rodríguez, J. P., Fontana, F., Faessler, M., Forster, C., Schmidhuber, J., Caro, G. D., Scaramuzza, D., and Gambardella, L. M. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters 1, 2 (2016), 661–667.

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. Advances in neural information processing systems 27 (2014).

Ho, J., and Ermon, S. Generative adversarial imitation learning. Advances in neural information processing systems 29 (2016), 4565–4573.

Juliani, A., Berges, V.-P., Teng, E., Cohen, A., Harper, J., Elion, C., Goy, C., Gao, Y., Henry, H., Mattar, M., et al. Unity: A general platform for intelligent agents. arXiv preprint arXiv:1809.02627 (2018).

Kreminski, M., Samuel, B., Melcer, E., and Wardrip-Fruin, N. Evaluating AI-based games through retellings. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (2019), vol. 15, pp. 45–51.

Kuefler, A., Morton, J., Wheeler, T., and Kochenderfer, M. Imitating driver behavior with generative adversarial networks. In 2017 IEEE Intelligent Vehicles Symposium (IV) (2017), IEEE, pp. 204–211.

Nandy, A., and Biswas, M. Unity ml-agents. In Neural Networks in Unity. Springer, 2018, pp. 27–67.

Perez-Liebana, D., Liu, J., Khalifa, A., Gaina, R. D., Togelius, J., and Lucas, S. M. General video game ai: A multitrack framework for evaluating agents, games, and content generation algorithms. IEEE Transactions on Games 11, 3 (2019), 195–214.

Quang Tran, D., and Bae, S.-H. Proximal policy optimization through a deep reinforcement learning framework for multiple autonomous vehicles at a non-signalized intersection. Applied Sciences 10, 16 (2020), 5722.

Rollings, A., and Adams, E. Andrew Rollings and Ernest Adams on game design. New Riders, 2003.

Safadi, F., Fonteneau, R., and Ernst, D. Artificial intelligence in video games: Towards a unified framework. International Journal of Computer Games Technology (2015).

Samak, T. V., Samak, C. V., and Kandhasamy, S. Robust behavioral cloning for autonomous vehicles using end-to-end imitation learning. arXiv preprint arXiv:2010.04767 (2020).

Sander, R. Emergent autonomous racing via multi-agent proximal policy optimization. Embodied Intelligence (2020).

Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017).

Torabi, F., Warnell, G., and Stone, P. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954 (2018).

Yannakakis, G. N. Game ai revisited. In Proceedings of the 9th conference on Computing Frontiers (2012), pp. 285–292.

Zhu, J., Villareale, J., Javvaji, N., Risi, S., Löwe, M., Weigelt, R., and Harteveld, C. Player-AI interaction: What neural network games reveal about AI as play. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021), pp. 1–17.

Published

2022-07-03

How to Cite

PRĂJESCU, I., & CĂLIN, A. D. (2022). Multiple Types of AI and Their Performance in Video Games. Studia Universitatis Babeș-Bolyai Informatica, 67(1), 21–36. https://doi.org/10.24193/subbi.2022.1.02
