Implementation of a reinforcement learning system with the Deep Q-Network algorithm in the AMC Dash Mark I game


Wargijono Utomo

Abstract

Reinforcement learning is a branch of artificial intelligence that trains algorithms through trial and error: an agent interacts with its environment, observes the consequences of its actions in the form of rewards or punishments, and uses the information gained from each interaction to update its knowledge. The problem identified in this research is the inconsistent behavior of non-player characters (agents) when exploring a game environment. This research applies the Waterfall model of the Software Development Life Cycle (SDLC) to train non-player characters (agents) in the AMC Dash Mark I game using the Deep Q-Network (DQN) algorithm in several stages. Training results show that model performance improved over time: the average episode duration and average episode reward increased from 7.75 to 24.7, while the exploration rate decreased to 0.05. This indicates that the model learned to obtain better rewards while performing fewer actions. The lower loss also shows that the model reduced its prediction errors and improved its predictive ability.
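The two quantities tracked in the abstract, the Bellman target used to train a DQN and the exploration rate that decays toward a floor of 0.05, can be sketched as follows. This is a minimal illustration under assumed hyperparameters (discount factor, decay rate); it is not the paper's actual implementation, and the network, replay buffer, and game environment are omitted.

```python
GAMMA = 0.99                    # discount factor (assumed, not from the paper)
EPS_START, EPS_END = 1.0, 0.05  # the abstract reports decay to a 0.05 floor
EPS_DECAY = 0.995               # multiplicative decay per episode (assumed)

def decay_epsilon(eps: float) -> float:
    """Decay the exploration rate, clamped at the 0.05 floor."""
    return max(EPS_END, eps * EPS_DECAY)

def dqn_targets(rewards, next_q_values, dones):
    """Bellman targets y = r + gamma * max_a' Q(s', a'), zeroing the
    bootstrap term on terminal transitions."""
    return [
        r + (0.0 if done else GAMMA * max(next_q))
        for r, next_q, done in zip(rewards, next_q_values, dones)
    ]

# After enough episodes the exploration rate settles at the floor.
eps = EPS_START
for _ in range(1000):
    eps = decay_epsilon(eps)
print(round(eps, 2))  # 0.05
```

In a full DQN, the targets above would be regressed against the online network's Q-values, and the falling loss reported in the abstract corresponds to that regression error shrinking as predictions improve.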


Article Details

How to Cite
[1]
W. Utomo, “Implementation of a reinforcement learning system with the Deep Q-Network algorithm in the AMC Dash Mark I game,” J. Soft Comput. Explor., vol. 5, no. 1, pp. 18–25, Mar. 2024.

