Enhancing Video Games Policy Based on Least-Squares Continuous Action Policy Iteration: Case Study on StarCraft Brood War and Glest RTS Games and the 8 Queens Board Game

With the rapid growth of video games and the ever-increasing number of players, only games with strong policies, actions, and tactics survive. How a game responds to an opponent's actions is the key issue for popular games. Many algorithms have been proposed to address this problem, such as Least-Squares Policy Iteration (LSPI) and State-Action-Reward-State-Action (SARSA), but they mainly depend on discrete actions, whereas agents in such settings have to learn from the consequences of their continuous actions in order to maximize the total reward over time. In this paper, we therefore propose a new algorithm based on LSPI, called Least-Squares Continuous Action Policy Iteration (LSCAPI).
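Since the contrast drawn above is between LSPI's discrete-action assumption and LSCAPI's continuous actions, a rough sketch may help illustrate the general idea. The Python snippet below is a minimal, illustrative LSPI-style loop in which the greedy improvement step searches over a continuous action interval (approximated here by a fine grid). The feature map, toy dynamics, and function names (phi, lstdq, greedy_action) are assumptions made for illustration only and are not taken from the paper's actual LSCAPI implementation.

```python
# Minimal illustrative sketch of an LSPI-style loop with a continuous-action
# greedy step, on a made-up 1-D toy problem. Not the paper's LSCAPI code.
import numpy as np

def phi(s, a):
    """Hypothetical polynomial feature map for a state-action pair."""
    return np.array([1.0, s, a, s * a, s ** 2, a ** 2])

def greedy_action(s, w, candidates=np.linspace(-1.0, 1.0, 41)):
    """Continuous-action greedy step, approximated by a fine grid search."""
    q_values = [phi(s, a) @ w for a in candidates]
    return candidates[int(np.argmax(q_values))]

def lstdq(samples, w_old, gamma=0.9, reg=1e-3):
    """One LSTDQ evaluation step: solve A w = b from (s, a, r, s') samples."""
    k = phi(0.0, 0.0).size
    A = reg * np.eye(k)
    b = np.zeros(k)
    for s, a, r, s_next in samples:
        a_next = greedy_action(s_next, w_old)  # action the current policy would pick
        A += np.outer(phi(s, a), phi(s, a) - gamma * phi(s_next, a_next))
        b += phi(s, a) * r
    return np.linalg.solve(A, b)

def lspi(samples, iterations=20):
    """Alternate policy evaluation (LSTDQ) and greedy improvement until weights settle."""
    w = np.zeros(phi(0.0, 0.0).size)
    for _ in range(iterations):
        w_new = lstdq(samples, w)
        if np.linalg.norm(w_new - w) < 1e-6:
            break
        w = w_new
    return w

# Toy usage: random transitions with a reward that favors a ≈ -s (invented dynamics).
rng = np.random.default_rng(0)
samples = []
for _ in range(500):
    s = rng.uniform(-1, 1)
    a = rng.uniform(-1, 1)
    r = -(s + a) ** 2
    s_next = float(np.clip(s + a, -1, 1))
    samples.append((s, a, r, s_next))
print("learned weights:", lspi(samples))
```

In a full continuous-action method, the grid search inside greedy_action would typically be replaced by an optimizer over the action space; the grid is used here only to keep the sketch short and self-contained.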

LSCAPI was implemented and tested on three different games: one board game, the 8 Queens, and two real-time strategy (RTS) games, StarCraft Brood War and Glest. The evaluation showed that LSCAPI outperforms LSPI in learning time, policy learning ability, and effectiveness.
