AI for MMO Strategy Games


AI for MMO Strategy Games

Alexandre Miguel Serra Vicente Barata

Dissertation for the degree of Master in Information Systems and Computer Engineering

Jury
President: Prof. José Carlos Alves Pereira Monteiro
Advisor: Prof. Pedro Alexandre Simões dos Santos
Co-advisor: Prof. Rui Filipe Fernandes Prada
Member: Prof. Daniel Jorge Viegas Gonçalves

April 2012


Abstract

This work created a framework of versatile agents capable of playing Massive Multiplayer Online Turn-Based Strategy (MMOTBS) games. These agents work together to produce human-like behaviour without cheating of any sort. A typical problem in this game genre is the distortion of strategy caused by players who quit the game after playing some turns, leaving their game assets unattended and open to exploitation by nearby players, who then gain an unfair advantage. This work's specialist agents can mitigate this problem by replacing the inactive players. In order to achieve these objectives, expert human players' gameplay was analysed and decomposed into domains of competence (economic, military and diplomatic skills), with one specialist agent focused on each. With a particular focus on the economic agent, this work used planning and expert players' domain knowledge (as well as pathfinding techniques and influence maps in the military agent) to create AI capable of adapting to any game situation. The obtained results showed that the economic agent was successful, being able to compete with human players on equal terms, and that, combined with the other agents, the AI system as a whole had performance comparable to human players and superior to inactive players.

Keywords: Adaptability, Agents, Artificial Intelligence, Massive Multiplayer Online game, Planning, Turn-based Strategy game


Resumo

(Translated from the Portuguese.) This work created a versatile framework of agents capable of playing massive multiplayer online turn-based strategy games (MMOTBSGs). These agents work together to play in a manner similar to human players, without any cheating. A typical problem in this game genre is the distortion of game strategy caused by players who abandon the game midway (inactive players), allowing their possessions to be taken, without risk of retaliation, by players who notice the situation. The agents created by this work address this problem by replacing the inactive players. To create these agents, several experienced players were analysed, whose decisions draw on different, largely independent competences (economy, military and diplomacy). Imitating the way human players think, several agents were created, each specialized in one competence. Of these, the most developed is the agent responsible for the economy, which uses search techniques, planning and domain knowledge to fulfil its goals. Also important is the agent responsible for the military, which uses pathfinding techniques and influence maps in addition to the techniques already used by the economic agent. The results obtained show that the artificial intelligence system created (the combination of all the agents) played at a level comparable to active human players and superior to inactive players.

Keywords: Adaptability, Agents, Artificial Intelligence, Turn-Based Strategy game, Massive Multiplayer Online game, Planning


Acknowledgements

To my girlfriend and my family for their endless support and love when I needed it the most. To my friends for being there and pointing out the right path when needed. To Pedro Santos for his guidance over the long journey that was creating this work. To Rui Prada for his technical pragmatism and ideas, which helped improve this work.


Contents

Abstract
Resumo
1 Introduction
  1.1 Contributions
  1.2 Outline
2 Related Work
  2.1 Related Games
    1483 Online
    Ikariam
    Travian
    Planetarion
    Civilization
    Conclusions
  2.2 AI Technology
    Finite State Machines
    Planning
      State-Space Search Planning
      A*
      Partial Order Planning
    Integrated Rule Based Systems
    Goal Based Systems
    Egos
    Terrain Analysis
      Influence Maps
      Waypoints
3 Almansur
  Game Types
    Static game
    Dynamic game
    Historical game
    Fantasy game
  Terrain
  Races/Cultures
  Economy
    Resources
    Market
  Military
    Contingents
    Battle Phases
    Orders
    Speeds
    Combat
  Diplomacy
4 Almansur AI Agents
  4.1 Almansur Conceptual Model
  4.2 Almansur Agent Architecture
  4.3 Strategy Agent
    Economic Decision Tree
    Military Decision Tree
  4.4 Economy Agent
    Economic Planning
    Cost Heuristic
    State Evaluation Function
  4.5 Military Agent
    Influence Maps
    Military Decisions
    Military Facility Module
    Military Recruitment Module
    Military Command Module
  4.6 AI configuration parameters
5 Results
  Static Historical Game
    Static Test I (STI)
    STI Conclusions
  Dynamic Fantasy Games
    Dynamic Test I (DTI)
    DTI Conclusions
    Dynamic Test II (DTII)
    DTII Conclusions
    Dynamic Test III (DTIII)
    Dynamic Test III AI Race comparisons
    DTIII Conclusions
6 Conclusions and Future Work
Bibliography


List of Figures

2.1 1483 Online's initial map
2.2 Ikariam's island map
2.3 Map provided by Travian Map Tools to detect inactive (idle) players
2.4 Planetarion's resource tab
2.5 Civilization overview map
2.6 Example of a Finite State Machine
An Influence Map
A Waypoint Example
Part of a victory point report in Almansur
Historical scenario example
The economic tab in Almansur
Market example
Almansur's military map example
Almansur Battle example
Almansur's military conceptual map
A diplomatic map in Almansur
The diplomatic messages tab
Almansur's conceptual map
The AI Player's Architecture
The decision tree for defining economic goals
The decision tree for defining military goals
The Military Agent's architecture
Elf recruitment options example
4.7 Example of a contingent with 1.5 experience
AI Conquest Planning Example
Example of orders given by the AI for a turn
Static Game signup example
Static Test I Victory Points
Static Test I AI Army/Terr Points
Static Test I Economic Investment Points
AI Food Production by territory example
Static Test I Military Investment Points
Static Test I Income/Upkeep Points
Dynamic Test I Victory Points
Dynamic Test I AI Army/Terr Points
Dynamic Test I Economic Investment Points
Dynamic Test I Military Investment Points
Dynamic Test I Income/Upkeep Points
DTII elf/dwarf conquest patterns
Dynamic Test II Victory Points
Dynamic Test II AI Army/Terr Points
Dynamic Test II Economic Investment Points
Dynamic Test II Military Investment Points
Dynamic Test II Income/Upkeep Points
Dynamic Test III Victory Points
Dynamic Test III AI Army/Terr Points
Dynamic Test III Economic Investment Points
Dynamic Test III Military Investment Points
Dynamic Test III Income/Upkeep Points
Dynamic Test III Victory Points by Race

List of Tables

3.1 Almansur Terrain Types
Race/culture type example for Fantasy Games
Almansur Resource Types
Elvish contingent type characteristics
Major contingent attributes and their purpose
Battle phases
Almansur Orders
Almansur Speeds
Diplomatic Relationships
Economic Agent State Data
Economic Agent Action Types
Military Agent State Data


Chapter 1

Introduction

Artificial Intelligence is an expanding computer science field that has found innumerable applications since its official birth at the Dartmouth Summer Research Conference on Artificial Intelligence [24]. Out of the many AI research fields, games are an especially interesting testing ground for new AI techniques, as they provide a rich, safe environment where new ideas can be put into practice and thoroughly tested before being adapted to real-life situations [8]. Players themselves also demand more and better AI in their games, which in turn has caused game developers to realize that investing in AI development is essential for the success of their games. As such, every year sees a higher focus on game AI development than the year before, even during times of economic crisis [16].

Game AI can take several forms and purposes. Sometimes its purpose is purely to advise the player (e.g. SimCity's advisors), sometimes it is to allow players to focus on high-level strategy by automating lower-level tasks (e.g. game pathfinding), and sometimes it is to challenge players (e.g. the AI-controlled rival nations in the Civilization series). Good AI is particularly important in single-player games, where it must provide adequate challenges to the player all by itself. However, even in multiplayer games (such as the MMOTBS game Almansur), AI can play a major role in solving specific problems, as this work will show.

Massive Multiplayer Online games (MMOs) are typically online-only games played by massive numbers of players, with a focus on the (social) interactions between them. Rather than copies sold, the success of these games is measured in terms of regular subscribers. Thus, MMOs are usually works in constant progress, with new content or changes to existing content being

released at regular intervals. Turn-based Strategy games (TBSs), as the name suggests, are strategy games (therefore involving long-term planning from players and potentially resource/army management) where player actions are frozen in time until their turn ends, at which time all frozen actions are executed simultaneously. Massive Multiplayer Online Turn Based Strategy games (MMOTBSs), born from the fusion of the two genres presented before, share the characteristics of both.

The main goal of this work was to create AI capable of playing in the complex MMOTBS world of Almansur with a performance comparable to human players and superior to inactive players in, at least, its first 10 turns. As a game from the MMO genre, Almansur is in constant change and, like many other games of its specific genre, it has a problem with inactive players. Inactive players are players who, after having started to play a specific game, stop playing at some point, giving no orders for two or more turns in a row. Due to Almansur's conquest-based gameplay, active players near inactive players could easily take their territories without a fight, earning an unfair advantage for the rest of the game.

One way to solve the inactive player problem presented above is to replace inactive players with AI players. In order to be successful replacements, AI players must play in a way similar enough to human players that other human players find it hard to tell the two apart. Otherwise, they will be attacked by groups of human players, who have the advantage of knowing their opponent is an AI player. This requirement makes this work's objectives share some similarities with passing a Turing test [36]. The Turing test is a proposal made by Alan Turing in the 1950s, consisting of a human expert inspector interviewing two parties through natural language, one a human and the other a computer.
The test would be successfully passed by the AI if the human inspector were unable to determine which party was the computer at the end of the test. Although there has been much academic research on AI that could pass the Turing test [28, 12, 14, 19], this kind of high-level cognitive ability has not yet been delivered by academic research. Instead, several topics of discussion have arisen about whether this definition of intelligence is inherently flawed [11].

In order to improve the believability of the AI, it was decided from an early date that the created system should not be able to cheat. Although many strategy games use cheating AI [21, 10], usually in the form of information access that far surpasses what human players in the same situation can obtain [34], such systems can lead to unrealistic behavior where the AI has unfair advantages that players resent [27]. Another important requirement for the AI was adaptability: as the game changes, so should the AI automatically adapt. This characteristic makes maintaining the AI more manageable for the game designers, who do not want to worry about the impact that their changes may have on the separate AI system. The creation of an AI player to replace inactive players may also allow Almansur to be played in single-player mode, and allow mixed matches where several humans and AIs compete.

Finally, it should be noted that using artificial agents to implement the AI system in question was a very interesting prospect for this work: it is natural to classify players as game agents [38]. The definition of agent in [37], "An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives", also fits this work's objectives quite well because, as described above, the AI system is required to be able to act separately from the game in order to replace the inactive players.

1.1 Contributions

The main contributions of this work are:

- A multi-level agent-based Artificial Intelligence framework for the MMOTBS game Almansur, which includes specialized goal-based planners for each agent. This framework was prepared to be easily extensible at several different levels: one can, for example, extend the goal system using the existing planner configurations in order to improve the AI's behavior without needing to understand the planner's structure.
- A publication, AI for MMO Strategy Games [9], accepted and presented at AIIDE (the Conference on Artificial Intelligence and Interactive Digital Entertainment) 2011 at Stanford University, USA, where it was well received.

- An operational and tested AI system for Almansur, which can be integrated with the game from the next version's (Almansur 2.2) release.

1.2 Outline

1. The Related Work chapter will present and discuss how other authors' works relate to this one and what lessons can be learned from them. For this purpose the chapter is split into an analysis of games with problems similar to this work's motivational problem (inactive, yet still present, players in the world) and an analysis of various techniques that could be useful in creating an AI system capable of solving this problem.
2. The Almansur chapter will present the basics of the game that the AI player was built for.
3. The Almansur AI Agents chapter will present and discuss the game's conceptual map and the architecture and implementation details of the created AI player, with focus on each individual component and its specifics (details of each planner and how goals interact with them, for example).
4. The Results chapter will present and discuss the testing methodology and the results obtained during the testing phase of this work.
5. The Conclusions and Future Work chapter will present and discuss future work topics and particular aspects of the AI player's implementation that could have been done differently.

Chapter 2

Related Work

This chapter is composed of two sections, Related Games and AI Technology. In Related Games, several games from the MMOTBSG genre were researched. None of these games uses autonomous AI players, relying instead on other means to accomplish the objectives which are to be solved with AI in this work. Therefore, the related games research was expanded to include Massive Multiplayer Online Real Time Strategy Games (henceforth referred to as MMORTSGs) and one Turn Based Strategy Game (TBSG) with AI players and advisors. This section will present each game and discuss its particular solution to the inactive player problem that these games, being similar to Almansur, all share. In AI Technology, various researched techniques are presented, and the pros and cons of each technique are discussed, as well as potential practical applications for them in this work.

2.1 Related Games

To the author's knowledge, there is a lack of AI developed for MMOTBSGs. This is due to a few factors:

- MMOTBSGs tend to be resource hogs, with severe scalability issues due to the massive number of players per game, which tends to limit the number of players that can play at the same time. If there is no space for all the humans who want to play, why even consider AI?
- MMOTBSGs tend to emphasize interaction between the players, with resource trading,

teamwork and communication being major factors, which would make the required AI either too complex or ineffective at its job.
- Inactive players are generally not too much of an issue in most MMOTBSGs (because most do not have a realistic space representation); instead, they become part of the game and help newer players in their early play while not being major factors in the mid/late game.

1483 Online

1483 Online is a free online MMOTBSG where the objective, like in Almansur, is to achieve map control through war and diplomacy between players.

Figure 2.1: 1483 Online's initial map

In order to prevent inactive players from ruining the game for others, 1483 Online's system automatically kicks out players who fail to give any orders to their nation for two turns in a row, allowing new players to take control of the Lands that were controlled by the inactive player. However, this is not a problem-free solution, as players who are losing tend to simply fake inactivity in order to get booted so they can join again. In order to minimize this exploit, players who get booted due to inactivity are penalized on their reliability status, which the developers intend to use to group reliable players together in future games.

Ikariam

Ikariam is a free online MMORTSG where the objective is to control and expand an initially small town into a big powerhouse. New players are given control of one town on one of the game's islands. Ikariam's world is split into islands, each of which can have up to 16 player towns. The objective of each game is, like in Almansur, to have the maximum overall score possible by the time the game ends. Unlike Almansur, however, there are no distinct races to pick from. Instead, all players have the same options at the same power, only modified by their relative evolution in the game. Another difference is that players in Ikariam cannot conquer other players' cities.

Figure 2.2: Ikariam's island map

In Ikariam, if you do not give any orders for a certain amount of turns, you are tagged as being inactive. Inactive players have (i) added after their names, which are partially greyed out in island view. An example of this can be seen in figure 2.2, as the player av1, playing in the easternmost position, is tagged as inactive. After a certain period under public inactivity status, if further orders have not been submitted, the account is simply deleted. The time it takes for an account to turn inactive after its player stops giving orders, and the time it takes for that account to be deleted after being tagged inactive, depend on how long the player has played the game so far: the more time he has played under that account, the more days he has to get back to it before being labelled inactive and,

eventually, deleted. Due to the way Ikariam works, an attacker will only get some resources off an inactive player on a successful attack, rather than all of his territories and their production. Besides, the automatic resource production of inactive players is reduced to half, which greatly lessens the impact of inactive players on the game flow; they simply serve as a way for new players to get a slight jump on resources in their early stages. The game isn't affected much by inactive players. Instead, they have become a natural part of it, and are simply removed after a while at no loss to the gameplay of the other, active, players. Thus, there is currently little motivation for Ikariam's developers to research and create AI that could take over a player after he is deemed inactive.

Travian

Travian is, much like Ikariam, an MMORTSG where players control and develop a village, growing from the tiny starting population of two inhabitants and no buildings into a huge village. At the end stages of the game a World Wonder must be built and upgraded to level 100, and the first player to do so wins the game. Much like in Almansur, there are several different races to pick from, each with their own special abilities and characteristics. Unlike in Almansur, however, you do not actually conquer other villages, but merely steal some of their resources on each successful attack.

Travian treats inactive players in much the same fashion as Ikariam. The major difference between Ikariam's and Travian's treatment of inactive players is that while Ikariam automatically identifies and tags inactive players for other players in the game to see, Travian simply deletes accounts that have been inactive for a certain period of time (usually two weeks). Finding out who the inactive players are is an integral part of the game, to the extent that external tools have been developed to help players know which of their neighbours are inactive, for farming purposes. Farming is the regular raiding of inactive players for resources with no danger of retaliation to the attacker. The possibility of farming inactive players has a negative impact on the game, but not nearly as much as it has in Almansur, where a new player

Figure 2.3: Map provided by Travian Map Tools to detect inactive (idle) players

entering the game expands the map with new powerful territories that can easily double a strong player's already significant income. In Travian, assaulting inactive players is merely a way for new players to get a little boost in their early game, as the resources won by raiding an inactive player won't be significant (especially because inactive players are immediately farmed by all of their neighbours). There isn't, therefore, much motivation for Travian's developers to invest in better ways to handle their inactive player base, as these players simply become part of the game's flow and have little to no effect past the early game.

Planetarion

Planetarion is a space-themed, tick-based (a tick being a special resource that represents time) MMOG where players control and develop their planet by gathering and stealing asteroids, which can later be mined for resources. With these, facilities and fleets of spaceships can be built, which are then used to get more asteroids by attacking other players' planets. Players can pick from several different races when joining a game, each with their own special characteristics and unique ships. Unlike in Almansur, however, players cannot actually conquer other planets, but merely steal some of their asteroids on each successful attack, thus earning more resources. The ultimate objective, like in Almansur, is to have the highest possible score at the end of the game.

Figure 2.4: Planetarion's resource tab

At first Planetarion did not have any special measures in effect to deal with inactive players. Instead, they were treated just like active players, and others would take advantage by stealing the inactive players' asteroids with impunity. Since round 22 in 2007, however, inactive players in a Planetarion game are automatically sent to an inactive part of the universe, where they are far away from active players and thus unlikely to get raided.

Civilization

Civilization is a TBSG where players build a civilization up from a single city. A standard game takes place from 4000 BC up to 2050 AD, at which point the game ends and the nation with the highest score is declared the winner. There are also special conditions that can make the game end sooner (a player managing to conquer all other players on the map, for instance), and there are several civilizations to pick from, each with its own characteristics and special units. These characteristics make Civilization a very similar game to Almansur. Unlike Almansur, however, Civ focuses on the single-player facet of the game. Single player is possible due to Civ's AI opponents. The AI can also help the player manage low-level tasks (by assigning workers to the best available tiles, for example), and several advisors for each facet of the game are present. Civ's AI is adjustable through parameters, uses fuzzy logic [10] and cheats in order to achieve its ultimate goal: to be fun [21].

Figure 2.5: Civilization overview map

The fourth installment of the series, CivIV, also has a secondary multiplayer component for up to eighteen players. As Civ is a game similar to Almansur, it shares some of the same concerns: an inactive player in multiplayer has a very negative impact on the game if ignored. The inactive player problem is, however, easily solved in CivIV by using the AI. Whenever a player leaves the game, the other players are prompted to vote on whether to wait for him to return, to save the game until that player returns, or to simply replace him with an AI player. Players joining a game can take over previously AI-controlled nations as well. CivIV players can also focus on high-level tasks by delegating low-level tasks to the AI. Although Civ's AI has solved a lot of problems for the game, it cannot be easily adapted to Almansur, as Civ is not an MMO game in constant evolution, and thus its developers scripted many of the race/resource/facility-specific decisions, making the hard task of adapting a game AI to another game even harder [22]. As these factors are in constant change in Almansur, the AI is required not to rely on scripting and instead take a more versatile approach, able to adapt on the fly.

Conclusions

The first game we analysed, 1483 Online, solved the inactive player problem by allowing new players to replace inactive ones. This solution isn't ideal for Almansur, as removing the

player from the game would still leave that player's former territories free for the taking. While allowing a new player to take over the Land of the booted player could help solve the problem, it would also create new problems, such as players trading control of Lands between themselves, and new players entering at a huge disadvantage if they take control of an inactive Land that didn't expand from the start and is now totally surrounded by enemies. Due to Almansur's design as a historical simulation game with realistic space representation [33], Ikariam's solution of simply removing inactive players from the game is also not suitable, as that would imply removing an active part of the world from the match and ruining the intended realism. Travian's solution of ignoring the inactive players is what has been used so far in Almansur, but it hasn't worked at all, as it allows nearby active players to gain significant unfair advantages, as discussed before. Planetarion's solution of sending inactive players to a faraway space, while certainly creative, doesn't make sense applied to Almansur's world. Because every individual territory in Almansur can be captured, moving an inactive player's Land, and the territories that were created with his entrance into the game, to an inactive, faraway part of the map would ruin the realism of the game.

2.2 AI Technology

MMOTBSG environments are complex and usually receive constant updates which modify the game rules. Thus, versatile systems that can adapt to game changes without requiring code changes are extremely valuable when building AI systems to play this kind of game. There are several techniques that can be helpful when creating such AI systems; several of them will be discussed in this section.

Finite State Machines

Finite State Machines (abbreviated FSMs), such as the ones described in [10], are automata with a finite number of possible states and transitions. Each state represents an action or behavior, while the transitions cause the active state to change to a different state when their

particular pre-conditions are fulfilled. Since equal pre-conditions always trigger equal transitions in FSMs, these are typically used to encode deterministic behavior.

Figure 2.6: Example of a Finite State Machine

An agent following this finite state machine would start in state q0, where it would wait for a 00 or 11 sequence in order to reach state q3. From state q1, any 01 or 10 sequence would lead back to q0. This example shows how easy it is to build a simple deterministic behavior with an FSM. The problem with FSMs is that, when applied to complex problems, states and transitions multiply rapidly and can easily become too much to handle and maintain, especially in rapidly changing environments; thus, they are best used as a simple solution to simple problems.

Planning

Planning is the act of searching through possible actions in the context of a given world state in order to attain a certain goal [17]. Planning can be viewed as the declarative way of creating AI, as the world state is represented by logical symbols, which represent the knowledge available to the agent. Thus, a planner applies actions to the current world state, generating new world states as a result, to which new actions can be applied, leading to new world states until eventually reaching a world state that satisfies its goals. After reaching the goal state in the planner, all the actions that were required to get there

are compiled and returned, which becomes the agent's plan: by executing those actions in the real world, the agent is expected to reach its goals. If a goal state is not reached, the goals are considered unattainable by the agent.

Planning systems decouple actions from goals, making it simple to implement layered behaviors. This decoupling allows new behaviors to be added without requiring changes to the core system, which is essential when considering AI for dynamic problems where the actions, goals and the state of the world itself change constantly. Some problems of planning systems are tied to the complexity of the planner engines, and to the difficulty of finding a symbolic representation that correctly expresses all the concepts present in a virtual world. It can also be very hard to plan adequately when goals are often unattainable from the agent's current context and when doing online problem solving [6], that is, when the agent only has partial knowledge of the problem and its solution.

The problem of planning in general is controlling the combinatorial explosion in complex problems where there are many applicable actions and where the distance in actions between the starting state and the goal state is large. In order to minimize this problem, several measures can be taken:

1. Split the complex problem into less complex sub-problems.
2. Limit the number of applicable actions.
3. Accept non-goal states as long as they are within an acceptable range of the goal.
4. Don't allow negative interactions between actions (that is, actions that cancel each other out in the same plan).

Within the planning genre, there are several different possible strategies [32]:

State-Space Search Planning

This straightforward approach consists of applying the available actions to the initial state, thus generating several new states.
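As a concrete illustration, this generate-and-test search over world states can be sketched as a minimal STRIPS-style forward planner. The facts and actions below (build_farm, harvest and so on) are invented for illustration and are not taken from Almansur:

```python
from collections import deque

# A state is a frozenset of facts; an action has preconditions,
# an add list and a delete list, as in STRIPS-style planning.
class Action:
    def __init__(self, name, pre, add, delete):
        self.name = name
        self.pre = frozenset(pre)
        self.add = frozenset(add)
        self.delete = frozenset(delete)

    def applicable(self, state):
        return self.pre <= state

    def apply(self, state):
        return (state - self.delete) | self.add

def plan(initial, goal, actions):
    """Breadth-first forward search: apply applicable actions to each
    generated state until a state satisfying the goal appears."""
    goal = frozenset(goal)
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path                       # the agent's plan
        for a in actions:
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, path + [a.name]))
    return None                               # goals deemed unattainable

actions = [
    Action("build_farm", {"has_gold"}, {"has_farm"}, {"has_gold"}),
    Action("harvest", {"has_farm"}, {"has_food"}, set()),
]
print(plan({"has_gold"}, {"has_food"}, actions))  # ['build_farm', 'harvest']
```

A real planner would order the frontier with a heuristic instead of expanding states breadth-first, which is exactly the tuning problem discussed in this section.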
If any of these new states is a goal state, the algorithm has found a solution and returns it; otherwise, the search algorithm picks one of the new states

to apply the available actions to, in hopes of generating a goal state. The process is repeated until a goal state is reached. This approach can go forward, starting from the current state of the agent and generating new states as described above until the desired goal is reached, or it can go backwards, starting from the desired goal and generating possible predecessors until the current state of the agent is reached. The problem with this particular approach to planning is that it can be extremely inefficient without a good heuristic function to guide the search algorithm. It is therefore extremely important that the heuristic is finely tuned when using state-space search planning. One of the most popular search algorithms is A*, described in the next section.

A*

A* is a search algorithm that searches through a graph and finds the path from the start node to the end node using a path-cost function (the cost of reaching the current node from the start node) and a heuristic that estimates the distance from the current node to the goal node [32]. It is a very adaptable, fast and easy-to-understand algorithm which is guaranteed to return optimal goal nodes when using an admissible heuristic (that is, a heuristic that never overestimates the distance to the goal). The sum of the heuristic and the path-cost function determines the best node to explore next in order to find the optimal solution.

Partial Order Planning

Partial order planning (henceforth referred to as POP planning) tries to decompose the goals into independent sub-goals that can be worked on separately. After being solved, the sub-goals' solutions can be combined into the solution to the original problem. The order in which the actions required to solve each sub-goal are executed is only

determined when it is absolutely necessary to do so, either due to conflicts between causal links 6 or when it is required to execute the plan. A conflict between causal links happens when the effects of an action negate the preconditions of at least one other action that is supposed to happen at the same time. In order to solve this conflict, the planner creates an ordering constraint 7 stating that the action that negates the precondition must come before the actions that established that precondition in the first place. There may be cases where solving a conflict is impossible because there is no place in the plan where the action can be executed without causing a new conflict. Detecting these cases early is important in order to prune state space that contains no solutions due to unsolvable conflicts. Actions that depend on variables are problematic to handle with this approach because they can cause conflicts when their variables have certain values, but not with other values. This makes conflict solving harder (although still possible, by adding a new type of constraint to stop variables from being instantiated to the conflicting values) and is a very common situation in Almansur, as the effects / preconditions of the available actions usually depend on several variables (for instance, buying at the market depends on the amount and type of resource being bought). Heuristics for this approach are also harder to tune correctly, as states are not represented directly for every step of the search, which makes it harder to estimate how far the current state is from the goal. Although POP planning has some advantages over state-space search planning, it also has some significant disadvantages in the context of Almansur. In POP planning, goals are split into independent sub-goals that can be solved with minimal amounts of effort. However, in Almansur goals are not always solvable when starting from the agent's current situation.
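To make the search machinery described above concrete, here is a minimal A* sketch. The 4x4 grid, unit edge costs and Manhattan-distance heuristic are illustrative assumptions for the example, not Almansur code:

```python
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Generic A*: returns a list of nodes from start to goal, or None.

    neighbors(n) yields adjacent nodes, cost(a, b) is the edge cost and
    heuristic(n) estimates the remaining distance to the goal (it must
    never overestimate for the result to be optimal).
    """
    frontier = [(heuristic(start), start)]      # priority queue ordered by f = g + h
    g = {start: 0}                              # best known path cost to each node
    came_from = {start: None}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:                     # reconstruct the path backwards
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            new_g = g[current] + cost(current, nxt)
            if nxt not in g or new_g < g[nxt]:  # found a cheaper route to nxt
                g[nxt] = new_g
                came_from[nxt] = current
                heapq.heappush(frontier, (new_g + heuristic(nxt), nxt))
    return None                                 # goal unreachable

# Example: shortest path on a 4x4 grid with a Manhattan-distance heuristic.
def grid_neighbors(n):
    x, y = n
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < 4 and 0 <= y + dy < 4:
            yield (x + dx, y + dy)

path = a_star((0, 0), (3, 3), grid_neighbors,
              cost=lambda a, b: 1,
              heuristic=lambda n: abs(n[0] - 3) + abs(n[1] - 3))
print(len(path) - 1)  # 6 steps: the Manhattan distance from (0,0) to (3,3)
```

Because the Manhattan heuristic never overestimates on a 4-connected grid, it is admissible and the returned path is optimal.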
6 Causal links connect actions whose effects satisfy other actions' preconditions to those actions.
7 Ordering constraints specify the order between specific actions.

Goals are also rarely independent inside a

speciality (economy / military / diplomacy), as all economic goals, for example, share the Land's resources, as do some military goals. Actions that depend on variables are harder to handle with POP planning because of their ambiguous conflict nature: some parametrizations may cause conflicts that others don't. Most actions in Almansur depend on variables and can thus cause these situations. Using the market-buying example described before, a conflict is created if the amount of resource being bought reduces the Land's gold to the point that other actions (such as facility upgrading) are no longer valid; yet if the buying amount were reduced, there would be enough resources left to still execute the other actions, and thus no conflict. POP planning heuristics have a hard time estimating the distance from the current state to the goal because, due to POP planning's delayed commitment on action ordering, it is very hard to determine exactly what the plan's state is at any given point. Almansur has very complex states with dozens of different variables, some of which are not directly comparable. This makes creating a heuristic hard even when knowing exactly the aspect of the current planning state, and much harder still when that information is not available.

Integrated Rule Based Systems

In 2008, Josh McCoy and Michael Mateas presented a multilayered, Integrated Agent for playing RTS games [25]. This agent divides the difficult problem of playing a full game match intelligently into the sub-problems of playing each of the game's individual facets intelligently in that match. Each of these facets is played by independent, specialized modules capable of individual action and teamwork. The combination of these modules results in an agent that mirrors the human player's thinking process. Examples of these modules are the Strategy Manager and the Recon Manager.
Each module is an expert system, composed of a large amount of rules based on expert human knowledge of the game. This knowledge allows, for example, the Strategy Manager to identify/predict an opponent's strategy by the way he has built his Land so far and adjust its own strategy accordingly to defeat it. This system's Integrated Agent is built upon ABL [23], a reactive planning language. Modules communicate with each other through ABL's working

memory and dynamic subgoaling mechanism. This system's Integrated Agent was tested on Wargus, a game built on the open-source RTS engine Stratagus, where it played against two hand-scripted strategies that served as benchmarks. These results included an increase from 13% to 53% average win ratio against the champion script of a related system [29]. Wide use of recon information in order to adapt the agent's strategy to the opponent's, using experts' game knowledge, was the major deciding factor in creating the strong agent described in this system. The conclusion taken from this system is that several independent, specialized modules can play a game better than one all-purpose module that tries to control everything at the same time, especially for complex games where there are several independent factors that can affect the outcome. The authors identified three main benefits of assigning tasks to specialist agents:

1. The computational effort required is greatly diminished, as it is no longer necessary to consider the huge state space resulting from the combination of all facets of the game. Instead, there are several modules, each of which has a simple role such as deciding what to build or where to attack.

2. Considering each facet of the game separately is a natural, human-like way to think in this genre of games, and it allows certain modules to be developed further than others as a response to increased needs for their capabilities in a particular genre/game.

3. Splitting facets into separate modules allows each module to provide specific help / assistance to players on facets they might dislike or not understand, thus opening up new functionality.

Therefore, although the aforementioned system is about AI for RTSGs and this work is about AI for MMOTBSGs, there are many similarities between both works' problems and objectives.
While MMOTBSGs do not have certain kinds of problems the system's authors had to solve, such as keeping the agent's army in formation during battle, this work's agent still has to do strategic thinking in order to get its armies into a favourable position, for example.
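To give a flavour of how such rule-based modules operate, the sketch below maps scouted facts about an opponent to a predicted strategy and a counter-strategy. The rules, building names and thresholds are all invented for illustration; the actual system encodes far richer expert knowledge in ABL:

```python
# A toy expert-system-flavoured strategy module: each rule maps observed
# (scouted) facts about an opponent to a predicted strategy and a counter.
# All rules and facts here are hypothetical, for illustration only.
RULES = [
    (lambda o: o["barracks"] >= 2 and o["towers"] == 0, ("early rush", "fortify")),
    (lambda o: o["towers"] >= 2, ("turtle", "expand economy")),
    (lambda o: o["stables"] >= 1, ("cavalry push", "recruit pikemen")),
]

def predict_and_counter(observations):
    """Return (predicted strategy, counter) for the first matching rule."""
    for condition, outcome in RULES:
        if condition(observations):
            return outcome
    return ("unknown", "balanced play")  # default when no rule fires

scouted = {"barracks": 3, "towers": 0, "stables": 0}
prediction = predict_and_counter(scouted)
print(prediction)  # ('early rush', 'fortify')
```

Real systems of this kind fire many rules in parallel and arbitrate between them, but the core idea of condition-action knowledge is the same.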

Goal Based Systems

Another system of interest is the Goal-Based Architecture for Opposing Player AI described by Kevin Dill and Denis Papp in [13]. This system presents a goal-based architecture for enemy players in two RTS games, Kohan II: Kings of War and Axis & Allies. The idea behind this architecture is to use a goal engine to decide on the artificial player's overall strategy and a reactive, fast-processing AI for the most time-limited decisions, such as keeping armies in formation during battle or retreating damaged units. The goal engine has various possible goals, which correspond to high-level game actions, such as ATTACK and DEFEND. Each of these goals has a priority that defines whether it is chosen to be executed or not by the goal engine. These priorities can be altered in order to allow for a certain amount of randomness and to discourage the AI from exhibiting erratic behaviour by constantly switching between two equally attractive, different goals. Processing the goal engine is computationally expensive, which is a problem in an RTS, but this is solved in the system by only activating the goal engine's think cycle every few minutes.

Egos

Another technique introduced in the system is the notion of egos. These are essentially AI personalities that influence how the artificial players play: what strategies they prefer and what units and buildings they use, among other factors. The egos are defined in a flexible, easily editable way, in order to allow players to create their own AI profiles for the game. A single AI profile requires at least three egos: one for the early game, one for regular gameplay and one for emergencies, but each profile can have many more egos, each of which is triggered by changes in the game statistics. Each ego has its own preferences, construction templates and military templates, making them play very differently from each other.
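The goal-selection mechanism described above can be sketched as follows. The goals, base priorities, ego multipliers and commitment bonus are invented numbers; the point is only to show how an ego reweights priorities and how a small hysteresis bonus discourages flip-flopping between equally attractive goals:

```python
import random

# Sketch of a goal engine in the spirit described above: each goal has a base
# priority, an ego supplies per-goal multipliers, a small random factor adds
# variety, and a commitment bonus for the current goal prevents the AI from
# flip-flopping between two equally attractive goals. All numbers are invented.
GOALS = {"ATTACK": 50, "DEFEND": 50, "EXPAND": 40}

def choose_goal(ego_weights, current_goal, commitment_bonus=10, rng=random.Random(0)):
    def score(goal):
        s = GOALS[goal] * ego_weights.get(goal, 1.0)
        s += rng.uniform(0, 5)                  # a little randomness
        if goal == current_goal:
            s += commitment_bonus                # hysteresis: stick with the plan
        return s
    return max(GOALS, key=score)

# A hypothetical aggressive ego boosts ATTACK and dampens DEFEND.
aggressive_ego = {"ATTACK": 1.3, "DEFEND": 0.8}
goal = choose_goal(aggressive_ego, current_goal="DEFEND")
print(goal)  # ATTACK: the ego's boost outweighs the commitment to DEFEND
```

Swapping in a defensive ego (or raising the commitment bonus) changes the outcome without touching the engine itself, which is precisely the data-driven property the authors highlight.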
As stated in [13], "overall, our architecture was a tremendous success" (it delivered a fun, challenging game experience) and it received significant critical acclaim, which can be attributed to the flexibility of this system's architecture and to its goal engine and data-driven design, which allowed gamers to develop their own AI profiles and developers to easily change the game AI.

The concept of egos allows for a huge variety of AI behaviours by enabling dynamic tweaking of goal weights, for example allowing defense-oriented goals to be favoured when under heavy attack by aggressive players. Unfortunately this system, like the last, also focused on RTS game AI, but the solutions found for this genre can be adapted to TBSGs and from there to MMOTBSGs. Also, in a turn-based game one can dedicate considerably more processing time to a goal engine than in an RTS game, which favours deliberative [37] approaches (the goal engine) over the computationally lighter reactive ones (the reactive AI). This technique was used in some Civilization games, such as CivIV [18], where it provided players with the additional challenge of identifying their AI neighbours' preferences in order to best fight against / with them. Egos also added replay value to the game by allowing the AI to behave differently from game to game.

Terrain Analysis

One way to provide useful information to the agent for tactical purposes is to perform terrain analysis. These techniques analyze the map or zone where the game takes place and create a model of it that the agent can use, for example, to avoid fortified enemy positions or attack weak points. Although potentially powerful, terrain analysis is difficult to automate [15] because of computers' difficulty with pattern matching. Humans can easily do terrain analysis, however, which makes it an interesting idea for agents trying to emulate human behaviour in strategy games.

Influence Maps

An influence map is a representation of the game world produced by terrain analysis where parts of the terrain (in Almansur's case, territories) are replaced by weights. These weights can represent various things, from areas controlled by powerful enemies to areas containing lots of natural resources.
Influence maps are discussed in some detail in [31] and their particular use in Age of Empires is discussed in [30].
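A minimal influence-map computation might look like the following sketch, where each friendly army projects influence that decays with Manhattan distance and cells accumulate the contributions. The grid size, army positions, strengths and decay factor are all illustrative assumptions:

```python
# A minimal influence-map sketch on a small square grid: each friendly army
# projects influence that decays with distance, and cells accumulate the
# contributions. Positions and strengths are invented for illustration.
SIZE = 5

def influence_map(armies, decay=0.5):
    """armies: list of ((x, y), strength). Returns a SIZE x SIZE grid of floats."""
    grid = [[0.0] * SIZE for _ in range(SIZE)]
    for (ax, ay), strength in armies:
        for y in range(SIZE):
            for x in range(SIZE):
                dist = abs(x - ax) + abs(y - ay)      # Manhattan distance
                grid[y][x] += strength * (decay ** dist)
    return grid

grid = influence_map([((1, 1), 100), ((3, 3), 60)])

# The highest-scoring cell sits under the stronger army; if the sources are
# friendly armies, high scores mark the safest parts of the map.
peak = max(((x, y) for y in range(SIZE) for x in range(SIZE)),
           key=lambda p: grid[p[1]][p[0]])
print(peak)  # (1, 1)
```

Replacing friendly strengths with negative enemy strengths turns the same grid into a danger map, which is the kind of varied use the references above describe.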

Figure 2.7: An Influence Map

In figure 2.7, the areas around the triangles received a boost. If one assumes the triangles are friendly armies, this could mean that the areas of the map with higher scores are safer than areas with lower scores, due to being near friendly armies. Influence maps have been used to create objectives for strategy game AI [26] and for determining the behaviour of AI players [20] [7]. These examples show that the information obtained through the use of influence maps can be extremely varied and, thus, of use in all aspects of the game. Although influence maps are powerful tools, they need to be kept constantly updated in a dynamic game such as Almansur, and the larger the map being played, the more costly the influence map is to create. Influence maps are thus best used in combination with other specialized techniques (the planning techniques described before, for example) in order to provide additional information when making terrain-dependent decisions, such as how to best move a specific army on a turn.

Waypoints

Waypoints, as described in [35], represent a subset of terrain locations that can be accessed by players in a game map. The connections between waypoints represent viable movement routes. Information can be collected from the way waypoints are set, their connections and their characteristics. They can be used to describe, for instance, location characteristics such as terrain types, facility levels or army amounts.

Figure 2.8: A Waypoint Example

The relationships between waypoints can also convey information such as how far one territory is from another, and the way they are grouped can point out undefended paths or warn of an incoming attack. One problem with waypoints, however, is that they must be coded into the game and are therefore not adequate for dealing with dynamic situations such as moving armies or games with dynamic terrain. Thus, while waypoints are interesting, the influence map technique is more useful for the purpose of this work.
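As an illustration of the data structure, the sketch below encodes a handful of hypothetical waypoints with location attributes, plus their connections, and answers a distance query with a breadth-first search. All names and attributes are invented:

```python
from collections import deque

# A small waypoint-graph sketch: waypoints carry location attributes and the
# edges are viable movement routes; a breadth-first search then answers "how
# many moves away is one territory from another?". All data is hypothetical.
waypoints = {
    "plain_a": {"terrain": "plain",    "fortress": 0},
    "hills_b": {"terrain": "hills",    "fortress": 1},
    "swamp_c": {"terrain": "swamp",    "fortress": 0},
    "mount_d": {"terrain": "mountain", "fortress": 2},
}
routes = {
    "plain_a": ["hills_b", "swamp_c"],
    "hills_b": ["plain_a", "mount_d"],
    "swamp_c": ["plain_a"],
    "mount_d": ["hills_b"],
}

def hops(src, dst):
    """Minimum number of route steps between two waypoints (BFS)."""
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if node == dst:
            return dist
        for nxt in routes[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None  # unreachable

print(hops("swamp_c", "mount_d"))  # 3: swamp_c -> plain_a -> hills_b -> mount_d
```

Note that both the graph and the attributes are fixed at authoring time, which is exactly the static quality that makes waypoints a poor fit for Almansur's dynamic maps.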

Chapter 3

Almansur

Almansur is a browser MMOTBSG set in a medieval-like world that combines empire-building, turn-based gameplay with the defining MMOG characteristic of allowing hundreds of players to play simultaneously [1]. At the start of each game, a player takes control of a Land 1 of a certain race/culture and tries to win that game against one or more players. Victory comes through the completion of several different objectives, which give victory points. In order to achieve these objectives, the players improve their territories by building / upgrading facilities on them, by recruiting / training armies to defend their Land and attack others, and by creating alliances between themselves through diplomacy.

Figure 3.1: Part of a victory point report in Almansur

1 Political entity / state controllable by a player

3.1 Game Types

Static game

A static game is a game played on a map created beforehand. Thus, the map and its available Lands are always the same in every game, with only the controlling players changing from game to game. This type of game requires all available Lands to be assigned to a player before starting, so it can typically take a very long time to get the game started, by which point many of the players who subscribed first are already inactive.

Dynamic game

A dynamic game is a game played on a map which is created dynamically as players join and choose their Land's basic characteristics (name and race/culture), and can thus vary a lot depending on the number of players who join, their races/cultures and the time at which they join the game. Due to its dynamic nature, only a very small number of players is required for the game to start, with the rest of the players joining as the game goes on. For this reason, this type of game is currently the most popular in Almansur.

Historical game

Figure 3.2: Historical scenario example

A historical game in Almansur, as figure 3.2 suggests, is a game where the terrain and races/cultures are based on historical events. Historical games are a subcategory of static games.

Fantasy game

A fantasy game in Almansur is a game where the terrain and races/cultures are based on fantasy worlds such as Tolkien's Lord of the Rings. Fantasy games can be played as both static and dynamic scenarios.

3.2 Terrain

Terrain is extremely important in Almansur, as it is where the game takes place. There are several different types of terrain, such as swamps, mountains, plains or forests. The type of a particular territory depends on the combination of three variables, each of which can take any real value from 0 to 1, and which determine the territory's characteristics.

Forestation: determines how dense a territory's forest is.
Relief: determines how rugged a territory is; high values create increasingly rugged terrain.
Swampness: determines how swampy a territory is; high values create swamps.

Table 3.1: Almansur Terrain Types

High forestation values do not by themselves create a forest territory type in Almansur; instead, the territory type is usually decided by its relief and swampness values, and any territory can have high forestation where warranted. If a territory has low values of both relief and swampness it is considered a plain, which may or may not have forests in it. The characteristics of a territory are extremely important: different races/cultures and contingent types perform distinctly on different territory types, some facilities perform better in certain types of territory (for example, unforested plains are the best places to build farm facilities, but the best stone/gold/iron mine locations are usually in mountains) and a territory's characteristics determine the race/culture types that live there, which is essential for recruitment.
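As an illustration only, a territory classifier in the spirit of the rules above might look like the following; the 0.5 and 0.8 thresholds are invented, not Almansur's actual values:

```python
# A sketch of how the three terrain variables could map to a territory type.
# The thresholds are hypothetical; as the text notes, high forestation does
# not create a separate type but decorates whatever base type the territory has.
def territory_type(forestation, relief, swampness):
    """Each argument is a real value in [0, 1]."""
    if swampness > 0.5:
        base = "swamp"
    elif relief > 0.5:
        base = "mountain" if relief > 0.8 else "hills"
    else:
        base = "plain"
    return f"forested {base}" if forestation > 0.5 else base

print(territory_type(0.1, 0.2, 0.1))  # plain
print(territory_type(0.9, 0.3, 0.2))  # forested plain
print(territory_type(0.2, 0.9, 0.1))  # mountain
```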

3.3 Races/Cultures

One of the defining characteristics of Almansur is having several different playable races/cultures, all of which are distinct and possess their own special abilities, advantages and disadvantages [3]. While different races (such as orcs and elves) usually have very distinct terrain preferences, different cultures (such as Christians and Muwallads) have more subtle differences in their terrain preferences. Table 3.2 exemplifies the kind of differences that exist between races/cultures: for each race/culture (Human, Orc, Uruk-hai, Barbarian, Elf, Dwarf) it lists the base morale and the ideal forestation, relief and swampness values.

Table 3.2: Race/culture type example for Fantasy Games

By analysing this table we can conclude, for example, that orcs are the best fighters in swamps but perform poorly in forests, while dwarves are excellent in high-relief terrain such as mountains but perform poorly in swamps. Besides territory suitability, each race/culture has further unique characteristics. They can have different contingent types with very different characteristics (only the orcish race can recruit uruk-hai hordes, for example), different buildings (only the orcish race can build warg dump facilities, for example) and different base morale for their contingents, as table 3.2 showed.

3.4 Economy

The economy facet is a part of all strategy games where players need to build their own facilities in order to receive resources and create armies. The more units/facilities/resources/technologies there are in a game, the more complex its economic system usually is.
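To make this kind of book-keeping concrete, the sketch below computes a Land's net gold for one turn from the revenue and cost categories used by Almansur's income statement; all amounts are invented for illustration:

```python
# Net gold for one turn, using the revenue / cost categories from Almansur's
# income statement (Leader, Taxes, Tributes, Gold Mines and Market revenues
# versus Tributes, Units, Market, Construction and Recruitment costs).
# Every amount below is a made-up example figure.
revenues = {"leader": 200, "taxes": 450, "tributes": 0,
            "gold_mines": 300, "market": 120}
costs = {"tributes": 50, "units": 600, "market": 80,
         "construction": 150, "recruitment": 100}

net = sum(revenues.values()) - sum(costs.values())
print(net)  # 90: positive, so this Land is not forced to take out a loan

# A balanced Land keeps its unit upkeep below what production can sustain; if
# net went negative, the game would automatically loan gold to pay the armies.
assert costs["units"] <= sum(revenues.values())
```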

As shown in figure 3.3, a Land has gold and food expenses (mostly due to army upkeep costs) which must be covered by its production. This means a player must balance his total army with his production, or these upkeep costs will surpass the production, leaving armies without food (which makes their loyalty drop, causing desertions) and the player with negative gold, as the Land automatically loans money in order to pay its armies.

Figure 3.3: The economic tab in Almansur

There are several sources of gold in Almansur, all of which can be seen in detail under revenues in the income statement presented in figure 3.3. Leader is the base source of income of the Land, Taxes is the income from taxation of the Land's territory population, Tributs is the income from receiving gold as tribute from other players, Gold Mines is the income from the combined production of the Land's gold mines and Market is the gold earned from selling other resources at the market. There are also several ways to spend gold in Almansur, all of which are also present under costs in the income statement of figure 3.3. Tributs is the amount of gold spent paying tribute, Units is the gold upkeep of the Land's armies, Market is the cost of the resources bought at the market with gold, Construction is the gold cost of the facilities that were built/upgraded and Recruitment is the combined gold cost of the recruited armies.

Resources

Table 3.3 explains the several different types of resources that currently exist in Almansur, each with its distinct purpose. These can typically be obtained by upgrading their respective

resource-producing facilities (farms produce food while gold mines produce gold, for example). Depending on the target territory's suitability (a real number between 0 and 1) for the resource type considered, the facility's production can be higher or lower. Resources can also be obtained by downgrading existing facilities. Resources are required for upgrading facilities and for recruiting and maintaining (through the Land's upkeep) existing armies, which makes their management an essential, complex part of the game. As they are used to pay upkeep costs for armies, gold and food are the most important resources in Almansur.

Gold (g): used for building/recruiting and paying upkeep costs; obtained from gold mines and territory taxes.
Food (f): used for recruiting and paying upkeep costs; obtained from farms.
Iron (i): used mostly for recruiting; obtained from iron mines.
Stone (s): used in small quantities for building; obtained from stone mines.
Wood (w): used mostly for building; obtained from lumber mills.
Horses (h): used for recruiting non-orcish cavalry; obtained from stables.
Wargs (w): used for recruiting orcish cavalry; obtained from warg dumps.

Table 3.3: Almansur Resource Types

Market

All resources in Almansur can be obtained through the market. Gold is obtained by selling the other resources, while the other resources are obtained by buying them with gold. There is a limit on how many resources can be traded each turn, which depends on how many resources were traded in the past (repeated trades of a particular resource increase the amount

available to sell / buy of that resource on later turns) and a tax, usually 5% of the resources a player wants to trade.

Figure 3.4: Market example

Using the market is very important, as it is often the case that a Land has excess resources of certain types while needing other types; the market thus plays an essential role in Almansur.

3.5 Military

Last but not least, the military facet exists in any strategy game where there is the notion of armies and their constituent units.

Figure 3.5: Almansur's military map example

As presented in figure 3.5, the military map in Almansur shows the visible armies, with their colors matching the colors of their owner Land's relationship with the player. Thus

red flags represent enemy armies while dark green flags represent the player's own armies. Arrows represent movements of the army to other territories. Conquer orders are represented as red arrows while other orders are represented as green arrows. It should also be noted that the type of terrain can usually be identified directly by how it looks on the map: white mountains are the roughest, light brown territories represent simple hills, the more trees a territory has inside it the more forested it is, and grey territories are swamps. Fortress facilities and cities are also represented on the map, as a circular wall and as a small house respectively.

Contingents

In order to wage war in Almansur, armies are required. These are composed of contingents, each of which represents a certain amount of a particular contingent (troop) type. Each of Almansur's races/cultures has its own contingent types, which have different strengths and weaknesses. A single army can contain many contingents of several different types, which are recruited with resources from territories with recruitable population of the Land's race/culture. When recruiting contingents, modifiers to their base statistics are applied depending on the levels of the Ironworks and Recruitment Center facilities of the target territory. Each Ironworks level increases the base armour of the recruited contingent by 10%, while higher Recruitment Center levels allow territories adjacent to the target to provide recruitable population and allow contingents to be recruited with some initial combat experience. If the territory has no Ironworks facility, only a very limited amount of contingents can be recruited; likewise, if there is no population of the Land's race/culture in a territory, only militia of whatever race/culture the territory has can be recruited.
Militia is a special contingent type, which is considerably cheaper and weaker than regular contingents, but has the advantage of being recruitable from any race/culture. Table 3.4 shows the contingent type details for the Elvish race: their upkeep in gold and food per month, their base armour, their range, morale, required ironworks level, their damage on

each combat phase and their recruitment cost.

Elf Contingents     Archer        Swordsman     Horse Archer
Upkeep              1g/1f/month   1g/1f/month   3g/2f/month
Base Armour
Range
Max. Train. Exp
Req. Ironworks      level 0       level 1       level 1
Ranged Damage
Shock Damage
Melee Damage
Recruitment Cost    3g/2i/2w/1f   3g/2i/1w/1f   8g/4i/2w/2h/2f

Table 3.4: Elvish contingent type characteristics

An explanation of the most important characteristics of these contingents is presented in table 3.5.

Race/culture: the race/culture the contingent belongs to.
Type: the type of weapons/tactics used by the contingent.
Size: the size of the contingent in number of soldiers.
Armour: how protected the contingent is.
Experience: how experienced the contingent is.
Status: how fit the contingent is.

Table 3.5: Major contingent attributes and their purpose

Experience gains and their effects are not equal for all contingent types: specialized contingent types such as elf horse archers can train to higher levels of experience and gain more benefit from high experience values than elf militia. This also means, however, that zero-experience elf horse archers are useless, while elf militia performs at nearly its best potential even without experience. At 0 status a contingent operates at only 50% efficiency. Status is lost when the army is moving and fighting, and recovered when it is resting.
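Two of the numeric rules above can be made concrete in a short sketch: the 10% base armour increase per Ironworks level, and the 50% efficiency floor at zero status. The linear interpolation between zero and full status is an assumption made for illustration, not a rule stated by the game:

```python
# Two modifiers from the text, made concrete: each Ironworks level adds 10%
# to a recruited contingent's base armour, and a contingent at 0 status
# fights at only 50% efficiency. The linear status scale is an assumption.
def recruited_armour(base_armour, ironworks_level):
    return base_armour * (1 + 0.10 * ironworks_level)

def efficiency(status):
    """status in [0, 1]; 0 -> 50% efficiency, 1 -> 100% (assumed linear)."""
    return 0.5 + 0.5 * status

print(recruited_armour(10, 3))    # 13.0: +30% armour from a level-3 Ironworks
print(efficiency(0.0))            # 0.5: an exhausted contingent at half strength
```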

Battle Phases

As detailed in table 3.6, combat in Almansur is processed in several different phases: ranged, charge, shock, melee and pursuit.

Ranged: armies shoot their long-range weapons at each other.
Charge: armies shoot their long/short-range weapons at each other while closing in.
Shock: armies clash at full speed against each other.
Melee: armies fight each other at close range.
Pursuit: the defeated army flees the battle; the victor pursues and inflicts further casualties.

Table 3.6: Battle phases

The contingent characteristics presented in the previous section influence how the contingent performs in battle: elf archers, for example, are incredibly powerful at range but useless once the enemy closes in, while elf swordsmen are incredibly powerful in the shock and melee phases but have no ranged attacks.

Orders

As detailed in table 3.7, armies in Almansur are controlled by means of several different orders: conquer, battle, rest, train and garrison.

Conquer: the army conquers the target territory, being vulnerable to attack.
Battle: the army stands in battle formation, prepared for enemy attacks.
Rest: the army rests, recovering status but being vulnerable to attack.
Train: the army trains, gaining experience but being vulnerable to attack.
Garrison: the army defends the Land's fortress in the target territory.

Table 3.7: Almansur Orders

Each of these orders has a different purpose. While only the conquer order allows a Land to gain additional territories, the process of conquest lowers the status of the army and leaves it unprepared for battle if an enemy army enters the territory being conquered during the turn; the battle order, by contrast, increases the army's status slightly and gives significant battle bonuses, but doesn't conquer the territory. Further details on each of these orders can be found in [4].

Speeds

When moving from territory to territory it is also required to set the speed at which the army should move. The various speeds in Almansur are explained in table 3.8.

Slow: the army moves slowly in battle formation, ready to attack.
Cautious: the army moves cautiously, being slightly slower than normal but less vulnerable to attack.
Normal: the army moves at its normal pace, being slightly vulnerable to attack.
Forced: the army moves as fast as possible, but is extremely vulnerable to attack.
Panic: the default speed for an army fleeing a lost battle; it cannot be set by the player.

Table 3.8: Almansur Speeds

Ordering an army to move at higher speeds (normal/forced) allows it to arrive faster at its destination, but it also decreases the army's battle power and degrades its status faster than moving at slower speeds (slow/cautious). Further details on each of these speeds can be found in [5].

Combat

Figure 3.6 is an in-game screenshot of the initial phase of combat. At the top there is a description of which players fought, where, and the type of terrain they fought in. The summary of both armies that fought is presented afterwards, with important characteristics such as their initial morale, ownership and type in focus - cavalry contingents give extra combat bonuses, on plain territories, to the army that has more of them on the field. Other information is also present, such as how many points each side won/lost due to the battle and how much overall efficiency the army has (extremely large armies incur an efficiency penalty due to the command limits each race/culture has).

Figure 3.6: Almansur Battle example

Initial morale depends on the army's order, speed (if moving), composition (type of contingents present in the army) and status at the start of the battle. Morale is the single most important factor in battles: armies take morale damage as well as physical damage, and when the morale of an army hits zero or below, it loses the battle and starts retreating, allowing the enemy to pursue and cause additional casualties. The amount of these casualties depends on the morale remaining for the victorious army and on its composition - cavalry armies will cause more casualties when pursuing broken (defeated, morale at zero or below) armies on plain terrain, and lose fewer of their numbers when fleeing from a lost battle. Finally, at the bottom of the screenshot, one can find the army composition details, with the contingents that fought and their characteristics (type, armour, experience, status and size). This information summarizes the conditions of the battle.

The army composition is also very important: as a battle has several phases and is won when the enemy army breaks, having an army without ranged capabilities can lead to its morale and numbers being severely thinned out by the time the shock and melee phases arrive, while having a fully ranged army can lead to a quick defeat once the ranged phases are over. The presented military concepts and their relationships are summarized in figure 3.7.

Figure 3.7: Almansur's military conceptual map

This conceptual map presents the various factors that influence an army's effectiveness in battle. Army factors represent operational details that directly affect a battle's outcome: inherent army characteristics such as its composition, status and training, and player choices like its speed and orders. The other factors represent some of the player's Land's inherent characteristics, such as its race/culture, the contingent types available, command limits and the terrain that the battle takes place on, which heavily influences and is in turn influenced by the army composition the player chooses to use (players who use a lot of cavalry in their armies attempt to conduct battles in plain territory for maximum cavalry bonus, and a player may choose to use a lot of cavalry in his army because there are a lot of plain territories on the map, for example). Much of the strategy required to play Almansur comes from the decisions of which orders to give to which armies at which speeds, as even a small army can win against a large one if the small army holds the battle order in a territory and the large one tries to move into that territory at forced speed. Considering all the different types of orders/speeds, territories and army compositions in Almansur, there is an abundance of possible battle situations, which makes it hard to predict how to optimally form and move a Land's armies in all situations.
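The phase structure and the morale-break rule can be illustrated with a toy combat loop. This is emphatically not Almansur's combat model; the damage tables and the simple tiebreak are invented to show how a force without ranged attacks can still win once the shock and melee phases arrive:

```python
# A toy battle loop illustrating the phase sequence and the morale-break rule
# described above. Damage numbers are invented; this is not Almansur's model.
PHASES = ["ranged", "charge", "shock", "melee"]

def fight(a, b):
    """a, b: dicts with 'name', 'morale' and per-phase 'damage'. Returns winner name."""
    for phase in PHASES:
        a["morale"] -= b["damage"].get(phase, 0)
        b["morale"] -= a["damage"].get(phase, 0)
        if a["morale"] <= 0 or b["morale"] <= 0:
            # The broken army flees; the pursuit phase would add casualties here.
            return b["name"] if a["morale"] <= b["morale"] else a["name"]
    return a["name"] if a["morale"] > b["morale"] else b["name"]  # stand-off tiebreak

archers = {"name": "elf archers", "morale": 100,
           "damage": {"ranged": 40, "charge": 20, "melee": 5}}
swords = {"name": "elf swordsmen", "morale": 100,
          "damage": {"shock": 50, "melee": 40}}
winner = fight(archers, swords)
print(winner)  # elf swordsmen: they absorb the ranged phases, then hit hard
```

Even in this toy version, tweaking a single damage entry flips the outcome, which mirrors the text's point that the abundance of order/speed/composition combinations makes battles hard to predict.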
3.6 Diplomacy

Diplomacy exists in all strategy games where several human players are expected to compete against each other in a free-for-all fashion. Diplomacy is particularly important when there are no pre-established teams or rules defining whom each player can or cannot ally. Its importance also increases as the number of active players in a single game increases.

Figure 3.8: A diplomatic map in Almansur

As figure 3.8 displays, the diplomatic view of the map in Almansur has several different colors indicating different diplomatic relationships. In this particular case, there are two enemy players to the south, one friendly player to the west and one neutral player to the east.

War (Red): Both Lands can move their armies into each other's territories, conquer them and fight each other.
Peace (Grey): The default relationship; neither Land can move its armies into the other's borders or share any information.
Friendship (Blue): Both Lands can opt to share information such as their map vision and move armies into each other's territories.
Alliance (Green): Same as friendship, but both Lands are part of the same alliance.

Table 3.9: Diplomatic Relationships

The diplomatic relationships detailed in table 3.9 are ordered from the lowest possible relationship to the highest. In order to change the relationship with other Lands, players must send them messages through the diplomatic message tab. These messages can contain proposals, which are automatically accepted in the case of downgrading relationships but need to be accepted manually by the receiving player in the case of upgrading relationships. Relationships can only be downgraded one level each turn. That means that if a player wants to declare war on another player with whom he had a friendship relationship, for example, he can only downgrade the relationship one level, to peace, in that turn. In the following turn, if he wishes to do so, he can downgrade the relationship again, from peace to war. An example of the interface used to send and receive messages is shown in figure 3.9 below:

Figure 3.9: The diplomatic messages tab

Although a player does not require the use of diplomacy to play well or even win in Almansur, the different capabilities of each race/culture allow for interesting game politics - an orc player may team up with an elf in order to get past heavy forest zones, where orcs are at their weakest in battle, or he may instead team up with a dwarf in order to trade gold, which orcs usually have in abundance, for iron, which dwarfs usually have in abundance. Thus, it is to the advantage of players to establish solid diplomatic relationships, lest they eventually be overwhelmed by enemies.
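The one-level-per-turn downgrade rule is easy to misread, so a minimal sketch may help. The enum values and function below are purely illustrative (the thesis does not show Almansur's actual implementation):

```python
from enum import IntEnum

class DipRelation(IntEnum):
    # Ordered from lowest to highest relationship, as in table 3.9.
    WAR = 0
    PEACE = 1
    FRIENDSHIP = 2
    ALLIANCE = 3

def downgrade(current: DipRelation) -> DipRelation:
    """Downgrade proposals are accepted automatically, but a relationship
    can only drop one level per turn (it cannot go below WAR)."""
    return DipRelation(max(current - 1, DipRelation.WAR))

# Going from FRIENDSHIP to WAR therefore takes two turns:
step1 = downgrade(DipRelation.FRIENDSHIP)  # PEACE after the first turn
step2 = downgrade(step1)                   # WAR after the second turn
```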


Chapter 4 Almansur AI Agents

4.1 Almansur Conceptual Model

Figure 4.1 displays Almansur's conceptual map, created during the research part of this work by playing Almansur and analysing its core characteristics, concepts and their interactions. As the military command conceptual map was already displayed in the previous chapter, this particular map focuses on the resource-oriented economic and military concepts.

Figure 4.1: Almansur's conceptual map

There are two main types of concept that this map displays:

- Action concepts represent actions that the player can execute in the game (a player

has the option of upgrading his facilities, exchanging resources at the market or recruiting contingents, for example).

- Object concepts represent game entities/properties that may change automatically over time due to processes (such as the population growth every turn, for example) or due to player actions (facilities increasing their resource production over time due to players upgrading them, for example).

The concepts illustrated in the map have several different types of relationship between them, as the legend details:

- Process links connect concepts related by means of a specific in-game process (the way population creates more population over time by the game's reproduction process, for example).
- Influence links connect concepts that influence processes and actions (the way the tax rate affects how much the population pays in taxes every turn, for example).
- Property links connect concepts that are properties of other concepts (the way both population and contingents possess loyalty as one of their properties, for example).
- Contains links connect concepts that contain instances of other concepts inside each of their instances (the way a territory contains several types and amounts of population and facilities, for example).

4.2 Almansur Agent Architecture

As seen in figure 4.2, the AI was implemented using a multi-agent architecture with three distinct agents: the strategy, economy and military agents. The facet decomposition used was the product of the previous chapter's expert human player analysis, and this type of architecture was selected in order to divide and conquer the problem of creating AI for such a complex game as Almansur. This architecture presents advantages and disadvantages relative to the alternative option of using a single agent to manage all game facets. The predicted main advantage of using a single agent that integrates all facets is the perfect coordination achieved between these. The

predicted main disadvantage of the single agent approach is the complexity of planning for every facet simultaneously.

Figure 4.2: The AI Player's Architecture

After some testing, the disadvantage presented by the increased planning complexity proved to be too great, as even the separate economic/military planning done by the specialist agents was complex and time consuming, with the finished economic agent taking around 2 seconds to find adequate solutions from its planning, which is a lot considering that thousands or more agents can run simultaneously on a single machine due to Almansur's MMO nature. On the other hand, the problem of coordinating the different agents' efforts was manageable with the addition of another specialist agent, which focuses on analysing the game's state and making high-level strategic decisions that guide the economic and military agents into cooperating with each other. To interact with the game engine, a specific module named AIController was created. This module is responsible for collecting the information the agents will need to plan, which is stored in their specific state, and for converting the agents' plans into game actions. It is important to note that the AI is not a module included in the game system, but a totally separate, independent system. In order to interact with the game servers, the AI has to log in like a regular human player, using a username and password on the official game website [1], save the required session data and access any Lands that are being played at the moment by that username, using the exact same interface as human players.

HTTP requests, both GETs and POSTs, are extensively used in the process of logging in, retrieving the required information from the server and sending the action information for the turn directly to the game server. In order for each agent to play while being as dynamic and resilient to game changes as possible, a custom search-based planning system [32] was created, which uses agent-specific goals and operators as well as the state information to plan out the best perceived course of action for the turn.

4.3 Strategy Agent

The strategy agent is responsible for the AI's high-level strategic decisions, which take the form of goals for the other agents [13]. This agent's state contains information about the Land's map territories, about the Land's resource production levels and about whether the AI's Land is at war with other players or not, and it uses this information to decide which goals to set for the Military and Economic agents. Economic goals can be of two kinds: resource goals and production goals.

1. Resource goals are achieved when the Land reaches a certain amount of a specified resource type.
2. Production goals require that the Land's production per turn of the specified resource type reaches a certain amount.

Military goals can be of three kinds: army size goals, facility goals and conquer goals.

1. Army size goals are achieved when the Land's armies cost a certain percentage of the Land's income to maintain and the Land's fortresses have a certain amount of troops guarding them.
2. Facility goals are achieved when the Land upgrades its secure (that is, in a territory with a fortress) military facilities up to a certain level.
3. Conquer goals are achieved when the Land conquers a certain territory.
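The custom search-based planning loop shared by the agents can be sketched generically as an A*-style best-first search over simulated game states. This is a minimal illustration of the approach only; the names, the goal-as-predicate encoding and the expansion budget are assumptions, not the thesis's actual classes:

```python
import heapq
from itertools import count

def plan(initial_state, goals, operators, cost, heuristic, max_expansions=10000):
    """Generic best-first planner: `operators(state)` yields applicable
    operators, each of which simulates its effect via `op.apply(state)`;
    cost and heuristic guide the search toward a state satisfying all goals."""
    tie = count()  # tie-breaker so heapq never has to compare states
    frontier = [(heuristic(initial_state, goals), next(tie), initial_state, [])]
    while frontier and max_expansions > 0:
        _, _, state, actions = heapq.heappop(frontier)
        if all(goal(state) for goal in goals):
            return actions  # sequence of operators achieving every goal
        max_expansions -= 1
        for op in operators(state):
            succ = op.apply(state)  # simulated successor state
            f = cost(succ) + heuristic(succ, goals)
            heapq.heappush(frontier, (f, next(tie), succ, actions + [op]))
    return None  # no complete plan found within the budget
```

In this sketch a "state" can be any value the operators know how to transform; the specialist agents described below plug in their own operators, cost and heuristic functions.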

The military agent requires a list of armies and a list of territories in order to plan conquests. The list of armies consists of all armies with size above a certain threshold (an adjustable parameter called retreat threshold) that are not currently in garrison order and have status above 40%. The list of territories consists of a certain number (an adjustable parameter called conquers per turn) of the territories closest to the Land's capital that are eligible for conquest (either enemy territories or uncontrolled territories). As the strategy agent was not the focus of this work, but rather a way to create simple goals for the economic and military agents to use in their planning, it was implemented using several simple decision trees, built following interviews with expert human players, each of which defines goals for a specific part of gameplay covered by the AI.

Economic Decision Tree

The economic decision tree creates goals for balancing the Land's resources. Each resource has a threshold amount (adjustable parameters called resource goal parameters) which dictates how much of that resource the Land should have in order to function optimally.

Figure 4.3: The decision tree for defining economic goals

As figure 4.3 shows, when one or more Land resources go under the threshold amount, a resource goal is created to try to bring them back to the threshold amount. If the missing resources are upkeep resources (gold and food), a production goal is created as well. This is done in order to help the Land support a larger upkeep in the future.
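As a sketch, the economic decision tree reduces to a few lines of code. The goal tuples, the parameter dictionary and the choice of the same threshold as the production target are illustrative assumptions, not the thesis's data structures:

```python
UPKEEP_RESOURCES = {"gold", "food"}  # resources consumed by upkeep

def economic_goals(land_resources, resource_goal_params):
    """For every resource below its threshold, emit a resource goal; for
    upkeep resources (gold, food) also emit a production goal, so the Land
    can support a larger upkeep in the future. Using the same threshold as
    the production target is an assumption for illustration."""
    goals = []
    for resource, threshold in resource_goal_params.items():
        if land_resources.get(resource, 0) < threshold:
            goals.append(("resource_goal", resource, threshold))
            if resource in UPKEEP_RESOURCES:
                goals.append(("production_goal", resource, threshold))
    return goals
```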

Military Decision Tree

Military goals depend only on the agent's Land war state. If the agent's Land is at war, military power is maximized through a goal that attempts to upgrade the Land's main military facilities to high levels (at least one ironworks at level 3 and one recruitment center at level 2), through a goal that attempts to recruit safe garrisons for the Land's existing fortresses (the exact percentage of the garrison's capacity that is filled depends on the influence the Land exerts over the garrisoned territory, but is generally high, especially in border territories) and through a goal that attempts to recruit above the Land's production levels, as shown in figure 4.4.

Figure 4.4: The decision tree for defining military goals

If at peace, military buildup is kept as a low priority through a goal that attempts to upgrade the Land's main military facilities to low levels (at least one ironworks at level 2 and one recruitment center at level 1), through a goal that attempts to recruit token garrisons for the Land's existing fortresses (the filled percentage of the garrison's capacity again depends on the Land's influence over the garrisoned territory, but is generally low, especially at the Land's capital) and through a goal that attempts to recruit in order to match the Land's upkeep with its production.

4.4 Economy Agent

The economy agent is responsible for planning how to attain the economic goals. This agent's state contains the AI's Land's economic details as well as some general information about the agent's Land. Using this information and the economic goals for the turn, this agent is able
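The war/peace branch of the military decision tree can be sketched in the same style. The goal tuples, facility levels drawn from the text, and the concrete fill fractions are illustrative placeholders for the influence-dependent values:

```python
def military_goals(at_war: bool, fortresses):
    """War: high facility levels, well-filled garrisons, recruit above
    production. Peace: low facility levels, token garrisons, match upkeep
    to production. The 0.9/0.2 fill fractions are assumptions standing in
    for the influence-based percentages described in the text."""
    if at_war:
        goals = [("facility_goal", "ironworks", 3),
                 ("facility_goal", "recruitment_center", 2),
                 ("recruit_goal", "above_production")]
        fill = 0.9  # generally high, especially in border territories
    else:
        goals = [("facility_goal", "ironworks", 2),
                 ("facility_goal", "recruitment_center", 1),
                 ("recruit_goal", "match_production")]
        fill = 0.2  # token garrisons while at peace
    goals += [("garrison_goal", f, fill) for f in fortresses]
    return goals
```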

to determine how best to improve the Land's current economic situation. The core of the economy agent is its planning system. The initial node of the economic planning system is the state of the current turn, which contains the data presented in table 4.1.

Current Turn: What turn it currently is in the game.
Land Resources: How many resources of each type are available to the Land this turn.
Market Resources: What amount/price of resources of each type are available to sell/buy at the market this turn.
Facility Types: What facility types are available for the Land's race and their details.
Upkeep: The amount of gold and food being spent on Land upkeep this turn.
Production: How many resources were produced by the Land's facilities this turn.
Future Production: The amount of resources which will be produced when all of the Land's facilities complete their ongoing upgrades.
Facilities: The amount of facilities of each type the Land has and their details.

Table 4.1: Economic Agent State Data

Economic Planning

Buy(res_type, amount): This action simulates the effect of buying a certain amount of a particular resource type from the market.
Sell(res_type, amount): This action simulates the effect of selling a certain amount of a particular resource type to the market.
Upgrade(fac_id): This action simulates the effect of upgrading a certain facility to its next level.
Pass_Turn(): This action simulates the effect of passing a turn, updating the state accordingly.

Table 4.2: Economic Agent Action Types

To create successors for the search, the AI applies several operators (described in table 4.2) to

this state, which simulate the outcome of several possible economic actions. These actions are searched through by the A* algorithm, using custom heuristic and cost functions, as well as a state evaluation function to decide which states are best when they are tied in both cost and heuristic. Since all of the operators described above have arguments, some of which have near-continuous ranges (such as the market-related ones), the total number of possible operators that can be applied each turn is in the range of tens of thousands. In order to reach a compromise between playing quality and software performance, finding a way to prune the search graph and guide the planner's search was a requirement. The market operators' arguments were restricted to very limited ranges and redundant operator sequences were forbidden. This means that a buy operation for food in a turn, when there is food available and the Land can afford to buy it all, will only consider buying as much food as possible. Thus, instead of the turn having one buy operator for each possible parametrization of buy, only the operator buying all the available food will be considered. It also means that after buying a certain resource, the Land can't sell that same resource in the same turn, and vice versa, as those are redundant operator sequences.

Cost

As cost, the turn number works well, since faster is better in Almansur when all goals are achieved:

c(s) = t(s) - t(s_0)   (4.1)

t(s) is the planning state s's turn number. t(s_0) is the initial state's turn number.

Heuristic

The heuristic function used by this agent to evaluate how far a certain state s is from a certain goal is slightly different for production goals prdg and resource goals rg. The production goal
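The pruning just described can be sketched as a successor generator for the market operators. The plain-dictionary state representation and the function name are assumptions for illustration:

```python
def market_successors(resources, market_stock, prices, gold, bought, sold):
    """Pruned market actions for one planning state:
    - buy: only the maximal amount (all the stock the Land can afford),
      instead of one operator per possible amount;
    - sell: only the full owned amount;
    - never buy and sell the same resource in the same turn (`bought` and
      `sold` track resources already traded), as that is redundant."""
    actions = []
    for res, stock in market_stock.items():
        affordable = int(gold // prices[res]) if prices[res] > 0 else 0
        amount = min(stock, affordable)
        if amount > 0 and res not in sold:
            actions.append(("buy", res, amount))
    for res, owned in resources.items():
        if owned > 0 and res in market_stock and res not in bought:
            actions.append(("sell", res, owned))
    return actions
```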

heuristic formula is:

h(prdg, s) = Σ_{r ∈ prdg} (oprd(r) - fprd(s, r)) × prcm(s, r)   (4.2)

r represents each resource for which the agent has a production goal in prdg. oprd(r) is the objective production amount of resource r set by the production goal. fprd(s, r) is state s's future production of resource r - that is, the production the AI's Land will have when all upgrading facilities complete. prcm(s, r) is state s's market price of resource r for the AI's Land. The resource goal heuristic formula is:

h(rg, s) = Σ_{r ∈ rg} ((or(r) - cr(s, r)) / fprd(s, r)) × prcm(s, r)   (4.3)

r represents each resource for which the agent has a resource goal in rg. or(r) is the objective resource amount set by the resource goal. cr(s, r) is the current amount of resource r for the AI's Land. The total heuristic formula is:

h(prdg, rg, s) = h(prdg, s) + h(rg, s)   (4.4)

Using these heuristic functions, nodes with smaller total heuristic values are preferred. One important thing to note, which is not represented in the equations, is that if the result of oprd(r) - fprd(s, r) or or(r) - cr(s, r) is negative, it is set to 0 instead, in order to avoid giving value to overcompleting a goal, which was proven by testing to cause the planner to prefer overcompleting single goals rather than completing all goals equally.

State Evaluation Function

Heuristic and cost work well together for guiding the economic agent to its current goals. However, there are typically many different ways to reach these goals, resulting in different final states - a final state might be reached which has more overall resources than another, yet both reached the proposed goals on the same turn, for instance. In these situations, the
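Transcribed into code, the two heuristics with the clamping of negative remainders to zero look like the sketch below, assuming dictionary-based goals and state. The zero-production guard is an added assumption (equation 4.3 divides by per-turn production, which could be zero):

```python
def h_production(prod_goals, future_prod, market_price):
    """Equation 4.2: remaining production needed per goal resource, valued
    at market price; negative remainders clamp to 0 so overcompleting one
    goal is never rewarded."""
    return sum(max(0, target - future_prod[r]) * market_price[r]
               for r, target in prod_goals.items())

def h_resource(res_goals, current, future_prod, market_price):
    """Equation 4.3: missing amount per goal resource, divided by per-turn
    production (an estimate of the turns needed to close the gap) and
    valued at market price. `or 1` guards zero production (assumption)."""
    return sum(max(0, target - current[r]) / (future_prod[r] or 1)
               * market_price[r]
               for r, target in res_goals.items())

def h_total(prod_goals, res_goals, current, future_prod, market_price):
    """Equation 4.4: the two heuristics simply add up."""
    return (h_production(prod_goals, future_prod, market_price)
            + h_resource(res_goals, current, future_prod, market_price))
```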

state evaluation function seval described below is used to decide which state is, overall, the best:

seval(g, s) = Σ_{resource} (a(s) + prda(s) × prdmul) × prcm(s, r) × gc(g)   (4.5)

resource represents each of the resource types available to the AI's Land. a(s) is the amount of the resource in the Land's coffers for state s's turn. prda(s) is the amount of the resource produced every turn by the Land for state s's turn. prdmul is an adjustable parameter called production multiplier, which serves to increase the weight of production values, as they tend to be smaller than the amount of the resource itself in the coffers, yet much more valuable, since the Land earns the produced resources every turn. prcm(s, r) is the state's market price of the resource for the AI's Land. gc(g) is an adjustable parameter called goal constant, which represents how intensely the Land currently requires a particular resource type. Its value depends on the Land's goals: if there is a goal for the resource its value is 5; if there is no goal for the resource its value is 1. These values were thoroughly tested in order to ensure that goal resources have priority, but not so much priority that all other resources would be recklessly sacrificed in order to obtain them. Having a bigger range of values for this variable would be useful in order to create a better balance when there are several competing resource goals with different priorities, but this wasn't implemented due to lack of time to properly test its implications. When completing all economic goals takes too long (more than 2 seconds), the planner returns the best plan found during the search. Testing indicated that this plan usually has actions that extend over the current turn, and is thus already a complete plan for the current turn, even if not the optimal one.
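The state evaluation function is a straightforward weighted sum, as the sketch below shows. The dictionary-based state is an illustrative assumption; the goal-constant values 5 and 1 are the tested values from the text:

```python
GOAL_CONSTANT = {True: 5, False: 1}  # goal resource vs. non-goal resource

def seval(goal_resources, amounts, production, prices, prd_mul):
    """Tie-breaking state evaluation (equation 4.5): for every resource,
    value its stockpile plus its per-turn production boosted by the
    production multiplier, priced at the market, and weighted 5x if the
    resource is currently a goal."""
    return sum((amounts[r] + production[r] * prd_mul)
               * prices[r]
               * GOAL_CONSTANT[r in goal_resources]
               for r in amounts)
```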
Each turn the economic agent creates a new plan, as in most cases the economic situation changes significantly as territories are lost or conquered, and it would be more complicated to adjust the old plan than to create a new one based on the current turn.

4.5 Military Agent

The military agent is responsible for the details of war and for solving the military goals. It is composed of three main modules, as shown in figure 4.5: the military facility module, the military recruitment module and the military command module, which solve different specialized subgoals.

Figure 4.5: The Military Agent's architecture

This agent's state, described in table 4.3 below, contains the AI's Land's military details as well as some general information about the agent's Land.

Current Turn: What turn it currently is in the game.
Land Resources: How many resources of each type are available to the Land this turn.
Upkeep: The amount of gold and food being spent on Land upkeep this turn.
Income: The amount of gold and food earned this turn by the Land.
Events: The currently active army events and their details.
Units: The Land's armies and their details.
Mil. Facilities: The Land's military facilities and their details.
Territories: The territories of the map visible to the Land and their details.

Table 4.3: Military Agent State Data

The units data type contains data on which armies are on the map, where they are, how much experience they have, what kind of troops they have and how much status they have. The events data type contains data on the current orders of the armies and their details: which type of order, to which territory the army is moving, at which speed it is moving and

in how many days the order is expected to be finished. The territories data type contains data on the territories known by the Land: which entity owns them and their diplomatic relationship with the Land, how much recruitable population (draftables) and general population they have, what their terrain type is and how much military influence the Land has over the territory.

Influence Maps

A territory's military influence is obtained by adding the influence from several factors:

- Unit Influence is obtained by adding the values of the units on the map. Enemy units add negative influence while friendly units add positive influence. The amount of influence added depends on the size of the unit - larger units add more influence. Unit influence is also added to adjacent territories, reduced by 50% from its original value, and does not spread any further.
- Fog Influence is negative influence added to a territory under the fog of war, as the Land can't see what is going on there. Fog influence, like unit influence, also spreads to adjacent territories reduced by 50% from its original value and does not spread any further.
- Dip. Relation Influence is either a positive fixed amount (in case the territory owner is friendly) or a negative fixed amount (in case the territory owner is an enemy).
- Mil. Facility Influence is obtained by multiplying a fixed amount by the levels of the ironworks, recruitment center and fortress facilities on a given territory: positive if the territory is friendly, negative if the territory is controlled by an enemy.

After being filled, the influence map gives a quick overview of the field when tactical decisions are required [30]. One such decision is deciding how much to recruit for garrisons. If a garrison is on a territory with high positive influence, it means that the territory is safe, so the garrison can be a smaller one.
On the other hand, if the garrison is on a negative-influence territory, it means the territory is unsafe, thus requiring a large garrison.
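The influence map fill described above can be sketched directly; the numeric weights (fog, diplomacy and facility constants) are placeholders, not the thesis's tuned values:

```python
def influence_map(territories, adjacency, units, fog, dip, facilities,
                  fog_value=-10, dip_value=10, facility_value=2):
    """Fill the per-territory military influence map. Unit and fog
    influence spread to adjacent territories at 50% strength and no
    further; diplomatic and facility influence stay local."""
    infl = {t: 0.0 for t in territories}
    for t, value in units.items():        # signed: friendly > 0, enemy < 0
        infl[t] += value
        for n in adjacency[t]:
            infl[n] += value * 0.5        # reduced 50%, spreads one step only
    for t in fog:                         # fog of war: negative influence
        infl[t] += fog_value
        for n in adjacency[t]:
            infl[n] += fog_value * 0.5
    for t, relation in dip.items():       # fixed amount, sign by relation
        infl[t] += dip_value if relation == "friendly" else -dip_value
    for t, levels in facilities.items():  # signed sum of facility levels
        infl[t] += facility_value * levels
    return infl
```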

Military Decisions

The initial node of the military planning system is the state of the current turn, similar to how the economic agent works. As mentioned before, the military agent was split into three modules, both to simplify the military agent's planning, which would be quite complex otherwise, and because the split made sense: each of the three sub-tasks is mostly independent from the others and could thus easily be separated without harming the AI's play quality.

Military Facility Module

This module is responsible for building a military infrastructure for the Land using the military goals produced by the strategy agent. In order to do this, the module first considers where to upgrade the Land's military facilities. These are only built/upgraded in territories with a fortress, in order to ensure their safety from enemy attacks. If there are no fortresses available, one is upgraded on one of the Land's territories. The decision process for upgrading the military facilities is simple: if the ironworks is below the required level and can be upgraded (resources are available and the facility is not already in the process of being upgraded), upgrade it, then do the same for the recruitment center.

Military Recruitment Module

This module is responsible for using the AI's resources to recruit military units, in the form of balanced armies, to attack the AI's enemies. It is also responsible for dynamically filling fortress garrisons using the strategy agent's goals and the influence maps. Thus, garrisons in the Land's protected territories (high influence from the Land's units, territories and well-garrisoned fortresses nearby) will always be smaller than garrisons in the Land's exposed territories (low influence, which means at least some of the units and territories nearby are hostile and the Land's fortresses in the zone are not fully garrisoned yet).
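A minimal sketch of how influence can drive garrison sizing follows. The binary threshold and the fill fractions are illustrative assumptions; the thesis scales garrison size with influence but does not give exact values:

```python
def garrison_target(influence, capacity, low_fill=0.2, high_fill=0.9):
    """Map territory influence to a target garrison size: safe territories
    (non-negative influence) get small garrisons, exposed territories
    (negative influence) get large ones."""
    if influence >= 0:
        return int(capacity * low_fill)   # protected: token garrison
    return int(capacity * high_fill)      # exposed: near-full garrison
```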

Armies are preferably recruited at the territory with the best military facilities, as they get a 10% armour bonus for each ironworks level on the territory and extra initial experience depending on the recruitment center level. Recruitment requires resources, proportional to which contingent types and how many of each are being recruited. The recruited contingents also require a continuous payment in gold and food every turn to be maintained, which constitutes the Land's upkeep. The decision of whether to recruit and how many contingents to recruit follows a ruleset:

- Armies should not be recruited on a territory without an ironworks unless it is for a garrison - most contingent types can only be recruited when the Land has an ironworks of level one or above, so it is best to save resources for recruiting better armies unless the recruitment is urgent, as in the case of garrisons.
- Armies should not be recruited for garrisons on the first two turns of the game - there are usually no immediate threats to a garrison at the start of a game, yet the influence on the starting territory of dynamic games is usually low, as there is no ironworks/recruitment center built on the territory yet. Before this rule was made, this led the AI to spend a lot of early resources on useless garrison armies, especially when playing races that start with small garrisons (elves start with garrisons of 750 troops for a fortress that can hold up to 2500, for example).
- Non-garrison armies should not be recruited if the territory they would be recruited on has enemy armies on it, unless they outnumber these severely - otherwise the recruited army will be destroyed immediately by the enemy army upon recruitment.
- Tiny armies (under 200 troops in total) shouldn't be recruited - they are too small to do anything relevant in the game.
- Armies are recruited with draftables and resources, so the recruitment can't spend more of those than there are available.
- Recruited armies cost gold and food to maintain; if the recruited army would put the Land at a deficit of these (that is, make its upkeep exceed production), then it may not always be good to recruit. The threshold that decides how much to recruit, upkeep-wise, at

each point is an adjustable parameter called upkeep goal, which will be described in detail later.

- Cavalry armies have distinctive characteristics (much faster than infantry in plain territory, much slower in mountains, for instance) which would need special treatment. Due to lack of time, the special rules to control them couldn't be implemented, which means cavalry armies are handled by infantry rules and, thus, there is no incentive to recruit them, as they would be poorly used by the AI.

Almansur is a very dynamic game with 10+ distinct races where both content and balance changes are made often. This characteristic makes dynamic solutions ideal for the game, as they automatically adapt the AI to the constant changes in the game. Recruiting equal amounts of all available contingents always creates a perfectly balanced army, which might not always be the optimal army available but is usually good enough to do its job. As armies are not worth much without training, their initial order is always to train, which usually makes them adept enough to be used in combat the following turn. It is also worth mentioning that garrison armies take priority over non-garrison armies when recruiting, as garrisons are usually required to defend important locations from enemies while non-garrison armies take a more offensive role (it is more important to provide a garrison for a fortress without one than to train 1000 men for an offensive army, for instance).

Figure 4.6: Elf recruitment options example

Armies are recruited by building an equal amount of every available infantry contingent type

while keeping the draftables/resource limitations in mind (for example, elvish armies will be composed of 50% archers and 50% swordsmen, since those are the two available infantry contingents, as can be seen in figure 4.6).

Military Command Module

This module is responsible for directly controlling the existing trained armies. A trained army is defined as an army with at least one and a half medals of experience, as seen in figure 4.7.

Figure 4.7: Example of a contingent with 1.5 experience

Some rules were also implemented in the military command module, which help the AI cope with specific situations:

- Armies with experience under a certain threshold (one and a half medals) are ordered to train for the turn - this ensures they will become trained armies, as most armies perform badly unless they have at least one and a half medals of experience.
- Armies with status under a certain threshold (20%) are exempt from being given orders for the turn, in order to rest up to full status - armies perform at as little as 50% of their original statistics when out of status, so keeping them at full status is good practice.
- Different armies standing in the same territory with compatible orders (that is, neither is a garrison and neither is currently in the middle of moving to another territory) are merged - as the game progresses, the small armies of the game's beginning start to become obsolete and lose a lot of members due to battles and other factors. Yet they are still very useful due to their high experience values, and thus should be joined with newly recruited armies.
- Armies with size under a certain threshold (the retreat threshold parameter) should retreat to the Land's main recruitment point - if they retreat to a place where there are other armies, they can merge and become useful again.
- If a territory with a garrison has an enemy army on it which is smaller than or equal to the garrison in numbers, the garrison may, 34% of the time, exit the fortress in battle

order - thus expelling the enemy from the territory in the ensuing battle and allowing the recruitment of a new, fresh garrison for the territory, while the ex-garrison army is free to launch a counter-attack.

The most common situation, however, is having to command armies to move around the game map, battling other armies and conquering territories. In order to determine which armies should conquer which territories and in what order, this module uses A*-powered planning with the help of a scheduling system. This system starts by roughly estimating how much time - in game days, not turns, as most armies can conquer several territories in a single turn - a given army will take to conquer a given territory. This estimate is not trivial to create by any means: the AI starts by running a standard A* pathfinding algorithm to determine how long it will take the army to reach the territory. The heuristic for this algorithm is the distance to the target territory as if all territories in a straight line to it were crossable and ideal for the race (thus costing only 5 days to cross per territory), which is an optimistic heuristic, as required. The cost function of the algorithm is the estimated number of days it takes to cross from one territory to another: always at least 5 days, but possibly more when crossing non-plain territory such as mountains, forests and swamps. Some fantasy races have an easier time crossing certain types of terrain, as was shown in the previous chapter, in table 3.2 on page 26. Thus, when deciding the penalty incurred for crossing an unfavourable terrain, the terrain's forestation/relief/swampness values are deducted from the race's ideal values for those characteristics. For example, since the Elvish race has an ideal forestation value of 1.0, even when it crosses the most heavily forested terrain, with a 1.0 forestation value, it receives no penalty (race value of 1.0 - terrain value of 1.0 = 0.0 = no penalty).
However, Humans (0.6 ideal forestation) would be heavily penalized by crossing a 1.0 forestation terrain, as 0.6 - 1.0 = -0.4, which is the threshold for the maximum penalty of 10 extra days added to the territory crossing.

If Humans crossed a 0.8 forestation terrain instead, they would hit the threshold for the minimum penalty of 4 extra crossing days, as race value of 0.6 - terrain value of 0.8 = -0.2. Finally, crossing terrain with a forestation value of 0.6 or less would incur no penalty.

Using this estimated crossing time per territory as the cost and the optimistic straight-line estimate as the heuristic, the A* algorithm returns the total estimated cost of reaching the objective territory in the most efficient fashion. Having calculated the cost of reaching the target territory, the AI still has to estimate how long the territory will take to actually conquer. This is done by means of a simple calculation - citing the adequate manual page [2]: "The number of days needed to conquer a territory without a fortress is equal to the total population of the territory divided by the army/armies size"; thus, if the territory has, for example, 100 population and the Land has an army of 10 conquering it, it will take roughly 10 days for the territory to be conquered.

The military planner uses two operators:

1. The Conquer Territory operator represents sending an army to conquer a certain territory. It creates a scheduled event containing the army details, the target territory and the estimated number of days to reach and conquer it.

2. The Wait Event operator, much like the pass-turn operator in the economic planner, advances time to the closest scheduled event and removes it, along with all events ending at the same time, from the state. This creates a new, updated state in which at least one army has no further orders, having just completed its previous ones.

The initial state of this module's planning is the current military state (described in table 4.3 on page 49).
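The crossing-penalty and conquest-time estimates described above can be sketched as follows. This is an illustrative sketch, not the thesis implementation: the function names are hypothetical, and the linear interpolation between the two penalty thresholds is an assumption, as the text only fixes the endpoint values (4 extra days at a -0.2 deficit, 10 at -0.4).

```python
def crossing_days(race_ideal, terrain_value, base_days=5,
                  min_penalty=4, max_penalty=10):
    """Estimated days for an army to cross one territory, for one
    terrain characteristic (forestation, relief or swampness)."""
    # Rounding avoids float noise around the -0.2 / -0.4 thresholds.
    deficit = round(race_ideal - terrain_value, 6)
    if deficit > -0.2:
        # Favourable enough terrain: no penalty (behaviour strictly
        # between 0 and -0.2 is an assumption).
        return base_days
    if deficit <= -0.4:
        return base_days + max_penalty   # maximum-penalty threshold
    # Between thresholds: interpolate from min to max penalty (assumed).
    t = (-0.2 - deficit) / 0.2
    return base_days + min_penalty + t * (max_penalty - min_penalty)

def conquer_days(territory_population, army_size):
    """Manual rule: days to conquer an unfortified territory equal its
    population divided by the conquering army's size."""
    return territory_population / army_size
```

The worked examples from the text are reproduced: Elves (1.0) on 1.0 forestation cross in 5 days, Humans (0.6) take 15 days on 1.0 forestation and 9 days on 0.8, and a 100-population territory falls to an army of 10 in roughly 10 days.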
The goals used for this planning system are provided by the strategy agent and consist of a list of conquer goals for enemy or uncontrolled territories. The strategy agent also provides the list of armies available to be used in the planning. The two operators, Conquer Territory and Wait Event, are used to generate successors from this initial state, and A* is used to search for a state that accomplishes all conquer goals.
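The search over scheduling states described above can be sketched as a small A* loop. This is a simplified illustration under stated assumptions, not the thesis implementation: names are hypothetical, travel-and-conquest estimates are fixed per army/territory pair (ignoring that an army's position changes after each conquest), and the cost and heuristic follow the formulas given next in the text.

```python
import heapq
import itertools

OPTIMISTIC_CONQUER_TIME = 5  # days (the parameter tuned in section 4.6)

def plan_conquests(armies, goals, travel_days):
    # State: (day, idle armies, scheduled events, unassigned goals, plan).
    # Events are (finish_day, army, territory) tuples.
    counter = itertools.count()  # heap tie-breaker
    start = (0, frozenset(armies), frozenset(), frozenset(goals), ())
    frontier = [(len(goals) * OPTIMISTIC_CONQUER_TIME, next(counter), start)]
    while frontier:
        _, _, (day, idle, events, remaining, plan) = heapq.heappop(frontier)
        if not remaining and not events:   # all conquests completed
            return plan, day
        successors = []
        # Conquer Territory operator: assign an idle army to a goal,
        # scheduling an event for when the conquest is estimated to end.
        for army in idle:
            for terr in remaining:
                finish = day + travel_days[(army, terr)]
                successors.append((day, idle - {army},
                                   events | {(finish, army, terr)},
                                   remaining - {terr},
                                   plan + ((army, terr),)))
        # Wait Event operator: advance time to the closest scheduled
        # event, freeing every army whose event ends at that time.
        if events:
            soonest = min(e[0] for e in events)
            done = {e for e in events if e[0] == soonest}
            successors.append((soonest, idle | {e[1] for e in done},
                               events - done, remaining, plan))
        for succ in successors:
            # f = c(s) + h(g, s): elapsed days plus an optimistic
            # estimate of the days needed for the unassigned goals.
            f = succ[0] + len(succ[3]) * OPTIMISTIC_CONQUER_TIME
            heapq.heappush(frontier, (f, next(counter), succ))
    return None, None
```

With two armies and two goal territories, for instance, the search assigns each goal to the army that minimizes the day on which the last conquest finishes.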

The cost function evaluates the cost of reaching the current planning state s and is the number of days elapsed since the beginning of the planning. This measure was found to work well in testing, since it encourages exploring army/territory conquest assignments in order to find the fastest option.

c(s) = d(s)    (4.6)

where d(s) is planning state s's day number.

The heuristic function evaluates how far a certain state s is from its goals g by estimating the number of days remaining until all conquests are done. This is the number of unconquered goal territories multiplied by a constant, tuned to represent an optimistic time for the average army to conquer an average territory.

h(g, s) = grem(g, s) * optimisticConquerTime    (4.7)

where grem(g, s) is the number of uncompleted goals remaining in the current state, and optimisticConquerTime is an adjustable parameter representing an optimistic view of the amount of time that an average army takes to conquer an average territory.

The objective of this planning system is to keep armies constantly active until all goals are accomplished, with the best-suited army assigned to each goal territory. Figure 4.8 exemplifies how the operators work in combination to create conquest plans for the Land's armies:

Figure 4.8: AI Conquest Planning Example

When there are several armies active over different parts of the map near territories which are marked as conquer goals, the planning will tend to have them conquer the closest territories,

with the larger armies getting several conquer orders in a row, as they conquer their territories much faster than small armies and are thus unlocked sooner. An example of this planning's result is shown in figure 4.9.

Figure 4.9: Example of orders given by the AI for a turn

4.6 AI configuration parameters

Due to the dynamic requirements of the AI, there are many easily adjustable parameters which dramatically influence how the AI plays. Most of them influence the goals that the strategy agent gives to the other agents, but some have a more specific impact on the game.

The Upkeep Goal parameter determines what percentage of the current turn's production the Land's upkeep is allowed to rise to while recruiting. There are two different values, for when the Land is at peace and at war: currently 100% at peace and 200% at war. This value directly influences how much the Land recruits. For instance, if the current turn's gold production were 100 and the army's upkeep 50, at peace the Land could only recruit an army that raises the upkeep to 100 at most. At war in the same situation, recruitment would be allowed up to 200 upkeep, increasing the military might of the Land when it is most needed, at the expense of its resource reserves.

The Resource Goal parameters control the threshold for activating resource goals for each resource - if any of the Land's resources drop below these thresholds, a resource

goal for that particular resource is created for the turn. This helps keep reserves of required resources without also prioritizing resources that are already in excess.

The Retreat Threshold parameter dictates the size under which an army should retreat to the capital, as described before in the military command rules. Currently, 500 was found to be a good value: below it, armies are no longer able to operate and should retreat back to the Land's capital.

The Optimistic Conquer Time parameter estimates how much time an average army takes to conquer an average territory in optimal conditions. Five days were found to be a good average after testing.

The Conquers Per Turn parameter dictates how many enemy/neutral territories are set as goals for the Land's armies to conquer every turn. Six was found in testing to strike a balance between spreading the Land's armies too thin and having them stopped most of the time.

The Production Multiplier parameter controls the degree to which production values are multiplied in order to be comparable to resource values. Five was found to strike a good balance: with production overweighted, the AI would spend all its resources every turn, while with production underweighted, it would not spend resources on facilities even when it would be wise to do so.

The Goal Constant parameter controls the degree to which goal resources are valued over non-goal resources when evaluating an economic state. Five was found to strike a good balance: overweighting the goal resources would lead other important resources to be completely ignored, while underweighting them would prevent them from being prioritized as much as they should.
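The Upkeep Goal rule above can be sketched as a small budget check; the function name and return convention are hypothetical, but the arithmetic follows the worked example in the text (production 100, upkeep 50: up to 100 total upkeep at peace, 200 at war).

```python
def recruitment_budget(gold_production, current_upkeep, at_war,
                       peace_goal=1.0, war_goal=2.0):
    """Extra upkeep the Land may take on through recruitment this
    turn, under the Upkeep Goal parameter (100% at peace, 200% at
    war, expressed here as multipliers of gold production)."""
    cap = gold_production * (war_goal if at_war else peace_goal)
    return max(0, cap - current_upkeep)
```

At peace the Land in the example may add 50 upkeep worth of troops; at war, 150; and a Land already above its cap recruits nothing.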


Chapter 5

Results

As mentioned in the Introduction, this work's main objective is to create AI capable of playing like human players in the first 10 turns of an Almansur game. To measure how well that objective was accomplished, as well as to detect and fix any problems with the AI, several test games were run. The results obtained from these tests are presented in detail in this chapter, in the form of comparisons between human, AI and inactive players' performance metrics. Some of these metrics are:

1. Victory Points measure how powerful a Land is, overall, at any given point. They are computed by adding army points, territory points, asset points and political points together.

2. Army Points and Territory Points measure how powerful a Land's army and economy are at any given point, respectively (territory points are computed by adding the value of territories as well as of the facilities in them).

3. Asset Points and Political Points represent the points a Land gains from its coffer resources and (if applicable) from being the leader of an alliance.

4. Gold Income and Gold Upkeep measure how much gold a Land produces and consumes, respectively, every turn.

5. Construction Points and Recruitment Points measure how much a Land has invested on its economy (through facilities) on a particular turn and how much a Land has invested

on its army on a particular turn, respectively.

These metrics will be presented in charts depicting their value in the context of specific games created to test Almansur 2.2's release. Even though this work's main objective, as stated before, is to have AI players play like human players in the first 10 turns of the game, this chapter's charts present statistics for the first 20 turns for each group of players (human, AI and inactive) in the test games, allowing a more encompassing analysis.

Inactive players, as mentioned in the Introduction, are human players who play less often than active players, to the point that they do not check the status of their Land every turn. Thus, the inactive players presented in this chapter are players who did not check the status of their respective Lands for five or more turns during the game they were playing.

The games analyzed are ordered from the first game played with AI to the last (that is, Static Test I was done before Dynamic Test I, which was done before Dynamic Test II, and so on). Both the game and the AI changed throughout the testing; these changes will be explained in the context of the obtained results. Due to Almansur's long scenarios (a typical scenario lasts up to three months) and the low number of testers (a typical test game had around 18 players, split between humans and AIs), only four games were completed up to turn 20 in time to be studied in this chapter.

The human players testing Almansur 2.2 (and thus figuring in this study) were among the most experienced players in the community, and should thus be considered expert players. Expert players are, as the name indicates, players who have played Almansur regularly for six months or more and know most of the tactics/strategies in the game, as well as how to make the most of their resources and special Land characteristics.
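The Victory Points aggregation defined at the start of this chapter can be sketched as a simple record type; the class and field names are hypothetical, but the sum follows the definition given above.

```python
from dataclasses import dataclass

@dataclass
class LandScore:
    army_points: int
    territory_points: int
    asset_points: int
    political_points: int

    @property
    def victory_points(self) -> int:
        # Victory Points are the sum of the four component scores.
        return (self.army_points + self.territory_points
                + self.asset_points + self.political_points)
```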
It should also be noted that, while players were never given direct information about which Lands were human or AI-controlled, they would usually be able to guess after a few turns due to the absence of diplomatic activity from the AI players.

5.1 Static Historical Game

The AI was first tested in a static historical scenario. These scenarios use pre-generated, static maps based on historical settings, such as the Almoravid occupation of the Iberian Peninsula or 12th-century France. Rather than featuring distinct fantasy races, these games feature distinct human cultures, controlled by the players who subscribe to the game (using figure 5.1's interface), choosing in the process which historical Land they wish to play. The main goals of this first test were:

1. Checking the AI system for software bugs that might impact its playing performance.

2. Identifying flaws in the AI's playstyle that could be improved with parameter changes.

3. Assessing the AI's capability to play in static historical games (where everyone plays the human race and has a large Land from turn 0).

Figure 5.1: Static Game signup example

Static Test I (STI)

This particular test game was played on a scenario based on the map of France in the 12th century. It had 13 active human players, 17 AI-controlled players and 2 inactive players. The game took about a month to complete, with one turn processing every day except on weekends, for a total of 30 turns. Players in this game started with an average of 8.3 territories


The Taxman Game. Robert K. Moniot September 5, 2003

The Taxman Game. Robert K. Moniot September 5, 2003 The Taxman Game Robert K. Moniot September 5, 2003 1 Introduction Want to know how to beat the taxman? Legally, that is? Read on, and we will explore this cute little mathematical game. The taxman game

More information

BASIC RULES OF CHESS

BASIC RULES OF CHESS BASIC RULES OF CHESS Introduction Chess is a game of strategy believed to have been invented more then 00 years ago in India. It is a game for two players, one with the light pieces and one with the dark

More information

Whatever the specifics of a plan may be, the following are key principles to make the plan most effective:

Whatever the specifics of a plan may be, the following are key principles to make the plan most effective: Behavior Management Principles For the ADHD Child What I would like to talk about in the last part of this presentation is on the behavior management principles for the ADHD child. In order to get specific

More information

Planning a Successful Visual Basic 6.0 to.net Migration: 8 Proven Tips

Planning a Successful Visual Basic 6.0 to.net Migration: 8 Proven Tips Planning a Successful Visual Basic 6.0 to.net Migration: 8 Proven Tips Jose A. Aguilar January 2009 Introduction Companies currently using Visual Basic 6.0 for application development are faced with the

More information

Layered Approach to Development of OO War Game Models Using DEVS Framework

Layered Approach to Development of OO War Game Models Using DEVS Framework Layered Approach to Development of OO War Game Models Using DEVS Framework Chang Ho Sung*, Su-Youn Hong**, and Tag Gon Kim*** Department of EECS KAIST 373-1 Kusong-dong, Yusong-gu Taejeon, Korea 305-701

More information

Making Sense of the Mayhem: Machine Learning and March Madness

Making Sense of the Mayhem: Machine Learning and March Madness Making Sense of the Mayhem: Machine Learning and March Madness Alex Tran and Adam Ginzberg Stanford University atran3@stanford.edu ginzberg@stanford.edu I. Introduction III. Model The goal of our research

More information

Disrupting Class How disruptive innovation will change the way the world learns

Disrupting Class How disruptive innovation will change the way the world learns Disrupting Class How disruptive innovation will change the way the world learns Clayton Christensen, Michael B Horn Curtis W Johnson Mc Graw Hill, 2008 Introduction This book is about how to reform the

More information

Mobile App Design Project #1 Java Boot Camp: Design Model for Chutes and Ladders Board Game

Mobile App Design Project #1 Java Boot Camp: Design Model for Chutes and Ladders Board Game Mobile App Design Project #1 Java Boot Camp: Design Model for Chutes and Ladders Board Game Directions: In mobile Applications the Control Model View model works to divide the work within an application.

More information

Near Optimal Solutions

Near Optimal Solutions Near Optimal Solutions Many important optimization problems are lacking efficient solutions. NP-Complete problems unlikely to have polynomial time solutions. Good heuristics important for such problems.

More information

Improving the Performance of a Computer-Controlled Player in a Maze Chase Game using Evolutionary Programming on a Finite-State Machine

Improving the Performance of a Computer-Controlled Player in a Maze Chase Game using Evolutionary Programming on a Finite-State Machine Improving the Performance of a Computer-Controlled Player in a Maze Chase Game using Evolutionary Programming on a Finite-State Machine Maximiliano Miranda and Federico Peinado Departamento de Ingeniería

More information

A Sarsa based Autonomous Stock Trading Agent

A Sarsa based Autonomous Stock Trading Agent A Sarsa based Autonomous Stock Trading Agent Achal Augustine The University of Texas at Austin Department of Computer Science Austin, TX 78712 USA achal@cs.utexas.edu Abstract This paper describes an autonomous

More information

Artificial Intelligence in Retail Site Selection

Artificial Intelligence in Retail Site Selection Artificial Intelligence in Retail Site Selection Building Smart Retail Performance Models to Increase Forecast Accuracy By Richard M. Fenker, Ph.D. Abstract The term Artificial Intelligence or AI has been

More information

The E-Myth Revisited By Michael E. Gerber

The E-Myth Revisited By Michael E. Gerber By Michael E. Gerber Introduction o Over 1 million new businesses are started each year in the U.S. o At least 40% will not make it through the first year o Within five years, more than 80% will have failed

More information

Test Automation Architectures: Planning for Test Automation

Test Automation Architectures: Planning for Test Automation Test Automation Architectures: Planning for Test Automation Douglas Hoffman Software Quality Methods, LLC. 24646 Heather Heights Place Saratoga, California 95070-9710 Phone 408-741-4830 Fax 408-867-4550

More information

Predictive Act-R (PACT-R)

Predictive Act-R (PACT-R) Predictive Act-R (PACT-R) Using A Physics Engine and Simulation for Physical Prediction in a Cognitive Architecture David Pentecost¹, Charlotte Sennersten², Robert Ollington¹, Craig A. Lindley², Byeong

More information

Random Fibonacci-type Sequences in Online Gambling

Random Fibonacci-type Sequences in Online Gambling Random Fibonacci-type Sequences in Online Gambling Adam Biello, CJ Cacciatore, Logan Thomas Department of Mathematics CSUMS Advisor: Alfa Heryudono Department of Mathematics University of Massachusetts

More information

User research for information architecture projects

User research for information architecture projects Donna Maurer Maadmob Interaction Design http://maadmob.com.au/ Unpublished article User research provides a vital input to information architecture projects. It helps us to understand what information

More information

Appendix B Data Quality Dimensions

Appendix B Data Quality Dimensions Appendix B Data Quality Dimensions Purpose Dimensions of data quality are fundamental to understanding how to improve data. This appendix summarizes, in chronological order of publication, three foundational

More information

LESSON 7. Leads and Signals. General Concepts. General Introduction. Group Activities. Sample Deals

LESSON 7. Leads and Signals. General Concepts. General Introduction. Group Activities. Sample Deals LESSON 7 Leads and Signals General Concepts General Introduction Group Activities Sample Deals 330 More Commonly Used Conventions in the 21st Century General Concepts This lesson covers the defenders conventional

More information

Shadows over Camelot FAQ 1.0 Oct 12, 2005

Shadows over Camelot FAQ 1.0 Oct 12, 2005 Shadows over Camelot FAQ 1.0 Oct 12, 2005 The following FAQ lists some of the most frequently asked questions surrounding the Shadows over Camelot boardgame. This list will be revised and expanded by the

More information

Managing Agile Projects in TestTrack GUIDE

Managing Agile Projects in TestTrack GUIDE Managing Agile Projects in TestTrack GUIDE Table of Contents Introduction...1 Automatic Traceability...2 Setting Up TestTrack for Agile...6 Plan Your Folder Structure... 10 Building Your Product Backlog...

More information

Introduction. Below is a list of benefits for the 4v4 method

Introduction. Below is a list of benefits for the 4v4 method KEY TO THE DIAGRAMS Introduction There are many debates on how best to coach the next generation of football players. This manual is putting forward the case for the 4v4 games method. In recent years,

More information

Network Security. Mobin Javed. October 5, 2011

Network Security. Mobin Javed. October 5, 2011 Network Security Mobin Javed October 5, 2011 In this class, we mainly had discussion on threat models w.r.t the class reading, BGP security and defenses against TCP connection hijacking attacks. 1 Takeaways

More information

Using Emergent Behavior to Improve AI in Video Games

Using Emergent Behavior to Improve AI in Video Games Noname manuscript No. (will be inserted by the editor) Using Emergent Behavior to Improve AI in Video Games Janne Parkkila Received: 21.01.2011 / Accepted: date Abstract Artificial Intelligence is becoming

More information

NPV Versus IRR. W.L. Silber -1000 0 0 +300 +600 +900. We know that if the cost of capital is 18 percent we reject the project because the NPV

NPV Versus IRR. W.L. Silber -1000 0 0 +300 +600 +900. We know that if the cost of capital is 18 percent we reject the project because the NPV NPV Versus IRR W.L. Silber I. Our favorite project A has the following cash flows: -1 + +6 +9 1 2 We know that if the cost of capital is 18 percent we reject the project because the net present value is

More information

Using Software Agents to Simulate How Investors Greed and Fear Emotions Explain the Behavior of a Financial Market

Using Software Agents to Simulate How Investors Greed and Fear Emotions Explain the Behavior of a Financial Market Using Software Agents to Simulate How Investors Greed and Fear Emotions Explain the Behavior of a Financial Market FILIPPO NERI University of Naples Department of Computer Science 80100 Napoli ITALY filipponeri@yahoo.com

More information

Test Automation Framework

Test Automation Framework Test Automation Framework Rajesh Popli Manager (Quality), Nagarro Software Pvt. Ltd., Gurgaon, INDIA rajesh.popli@nagarro.com ABSTRACT A framework is a hierarchical directory that encapsulates shared resources,

More information

IMGD 1001 - The Game Development Process: Fun and Games

IMGD 1001 - The Game Development Process: Fun and Games IMGD 1001 - The Game Development Process: Fun and Games by Robert W. Lindeman (gogo@wpi.edu) Kent Quirk (kent_quirk@cognitoy.com) (with lots of input from Mark Claypool!) Outline What is a Game? Genres

More information

6.080 / 6.089 Great Ideas in Theoretical Computer Science Spring 2008

6.080 / 6.089 Great Ideas in Theoretical Computer Science Spring 2008 MIT OpenCourseWare http://ocw.mit.edu 6.080 / 6.089 Great Ideas in Theoretical Computer Science Spring 2008 For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

More information

How To Play Go Lesson 1: Introduction To Go

How To Play Go Lesson 1: Introduction To Go How To Play Go Lesson 1: Introduction To Go 1.1 About The Game Of Go Go is an ancient game originated from China, with a definite history of over 3000 years, although there are historians who say that

More information

Multi-user Collaboration with Autodesk Revit Worksharing

Multi-user Collaboration with Autodesk Revit Worksharing AUTODESK REVIT WHITE PAPER Multi-user Collaboration with Autodesk Revit Worksharing Contents Contents... 1 Autodesk Revit Worksharing... 2 Starting Your First Multi-user Project... 2 Autodesk Revit Worksets...

More information

A qualitative examination of online gambling culture among college students: Factors influencing participation, maintenance and cessation

A qualitative examination of online gambling culture among college students: Factors influencing participation, maintenance and cessation A qualitative examination of online gambling culture among college students: Factors influencing participation, maintenance and cessation R I N A G U P T A, J E F F D E R E V E N S K Y & M I C H A E L

More information

THE MANAGEMENT OF INTELLECTUAL CAPITAL

THE MANAGEMENT OF INTELLECTUAL CAPITAL THE MANAGEMENT OF INTELLECTUAL CAPITAL Many companies have come to realize that market value multiples associated with its intangible assets (patents, trade-marks, trade secrets, brandings, etc.) are often

More information

COMPETITIVE INTELLIGENCE

COMPETITIVE INTELLIGENCE COMPETITIVE INTELLIGENCE GOVOREANU Alexandru MORA Andreea ŞERBAN Anca Abstract: There are many challenges to face in this century. It s an era of information. Those who have the best information are going

More information

MAP-ADAPTIVE ARTIFICIAL INTELLIGENCE FOR VIDEO GAMES

MAP-ADAPTIVE ARTIFICIAL INTELLIGENCE FOR VIDEO GAMES MAP-ADAPTIVE ARTIFICIAL INTELLIGENCE FOR VIDEO GAMES Laurens van der Blom, Sander Bakkes and Pieter Spronck Universiteit Maastricht MICC-IKAT P.O. Box 616 NL-6200 MD Maastricht The Netherlands e-mail:

More information

WRITING PROOFS. Christopher Heil Georgia Institute of Technology

WRITING PROOFS. Christopher Heil Georgia Institute of Technology WRITING PROOFS Christopher Heil Georgia Institute of Technology A theorem is just a statement of fact A proof of the theorem is a logical explanation of why the theorem is true Many theorems have this

More information

Networked. Field Services

Networked. Field Services Networked Field Services Table of Contents Overview.............................................3 Visibility and Enterprise Mobility.............................3 The New Networked Organization............................4

More information