Show simple item record

dc.contributor.advisor: Madden, Michael G.
dc.contributor.author: Glavin, Frank G.
dc.date.accessioned: 2016-01-28T09:27:24Z
dc.date.available: 2016-01-28T09:27:24Z
dc.date.issued: 2015-09-30
dc.identifier.uri: http://hdl.handle.net/10379/5500
dc.description.abstract: Reinforcement learning (RL) is a paradigm in which an agent interacts with an environment. The agent carries out actions in the environment and, based on a reward signal, receives positive reinforcement for actions deemed “good” and penalties for “bad” actions. The goal of the learning agent is to maximise the amount of reward it receives over time. This thesis presents several new behavioural architectures for controlling non-player characters (NPCs) in a modern first-person shooter (FPS) game using reinforcement learning. NPCs are computer-controlled players that are traditionally programmed with scripted, deterministic behaviours. We propose the use of reinforcement learning to enable the NPC to learn its own strategies and adapt them over time. We hypothesise that this will lead to greater variation in gameplay and produce less predictable NPCs. The first contribution of this thesis is the design, development and testing of two general-purpose Deathmatch behavioural architectures called Sarsa-Bot and DRE-Bot. These architectures use reinforcement learning to control and adapt their behaviour. We demonstrated that they could learn to play competently and achieve good performance against fixed-strategy scripted opponents. Our second contribution is the development of a reinforcement learning architecture, called RL-Shooter, specifically for the task of shooting. The opponent's movements are read in real time and the agent chooses shooting actions based on those that caused the most damage to the opponent in the past. We carried out extensive experimentation which showed that the RL-Shooter architecture could produce varied gameplay; however, there was no clear upward trend in performance over time. This led to our third contribution, which involved developing extensions to the SARSA(λ) algorithm called Periodic Cluster-Weighted Rewarding and Persistent Action Selection. We designed these to improve the learning performance of RL-Shooter, and we demonstrated that their use resulted in a clear upward trend in the percentage hit accuracy achieved over time. Our final contribution is a skill-balancing mechanism, called Skilled Experience Catalogue, which is based on a by-product of the learning process. The agent systematically stores “snapshots” of what it has learned at different stages of the learning process. These can then be loaded during the game in an attempt to closely match the abilities of the current opponent. We showed that the technique could successfully match the skill level of five different scripted opponents with varying difficulty settings. [en_IE]
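The abstract builds on the SARSA(λ) algorithm. As a point of reference, a minimal tabular SARSA(λ) loop with accumulating eligibility traces can be sketched as below; the toy corridor environment, hyperparameters, and function names are illustrative assumptions, not the thesis's FPS environment or its extended variants.

```python
import random

# Illustrative tabular SARSA(lambda) on a toy 5-state corridor.
# Reaching the rightmost state yields reward 1 and ends the episode.
N_STATES = 5          # states 0..4; state 4 is terminal
ACTIONS = [-1, +1]    # move left / move right
ALPHA, GAMMA, LAM, EPS = 0.1, 0.9, 0.8, 0.1

def epsilon_greedy(Q, s, rng):
    """Pick a random action with probability EPS, else the greedy one."""
    if rng.random() < EPS:
        return rng.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[s][a])

def train(episodes=200, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        e = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]  # eligibility traces
        s = 0
        a = epsilon_greedy(Q, s, rng)
        while s != N_STATES - 1:
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            a2 = epsilon_greedy(Q, s2, rng)
            # TD error; no bootstrapping from the terminal state.
            delta = r + GAMMA * Q[s2][a2] * (s2 != N_STATES - 1) - Q[s][a]
            e[s][a] += 1.0
            # Credit all recently visited state-action pairs, decaying traces.
            for si in range(N_STATES):
                for ai in range(len(ACTIONS)):
                    Q[si][ai] += ALPHA * delta * e[si][ai]
                    e[si][ai] *= GAMMA * LAM
            s, a = s2, a2
    return Q

Q = train()
# After training, moving right should be valued above moving left at the start.
```

The thesis's Periodic Cluster-Weighted Rewarding and Persistent Action Selection extensions modify how such an update assigns reward and selects actions; this sketch shows only the baseline algorithm they extend.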
dc.subject: Reinforcement learning [en_IE]
dc.subject: Artificial intelligence [en_IE]
dc.subject: Non-player characters [en_IE]
dc.subject: Computer games [en_IE]
dc.subject: First person shooter [en_IE]
dc.subject: Information technology [en_IE]
dc.subject: Informatics [en_IE]
dc.subject: Engineering and Informatics [en_IE]
dc.title: Towards inherently adaptive first person shooter agents using reinforcement learning [en_IE]
dc.type: Thesis [en_IE]
dc.contributor.funder: Higher Education Authority (HEA) [en_IE]
dc.local.note: This research involves the design and development of several novel behavioural architectures for computer-controlled agents in modern computer games. Specifically, new reinforcement learning techniques are used to enable the agents to learn and adapt their in-game behaviour in order to generate more interesting and diverse game-play for human players. [en_IE]
dc.local.final: Yes [en_IE]
nui.item.downloads: 5479



Attribution-NonCommercial-NoDerivs 3.0 Ireland
This item is available under the Attribution-NonCommercial-NoDerivs 3.0 Ireland licence. No item may be reproduced for commercial purposes. Please refer to the publisher's URL where this is made available, or to notes contained in the item itself. Other terms may apply.



