Scaling
Problems Associated With Rule-Based Decision Algorithms In Multiple-Objective Situations -- Value-Driven Methods As An Alternative
Dr. Robert M. Kerchner
RAND Corporation
The methodologies commonly used for simulating decision processes in Distributed Interactive Simulation (DIS) computer-generated forces (CGFs) rely on rule-based systems to determine the choice made by a decision maker. Rule-based methods typically do well on prototype problems, but realistic decisions are often made in environments where the decision maker is simultaneously concerned with multiple objectives. The number of rules required by a rule-based system typically grows very rapidly with the number of objectives, which may in turn grow as the number of entities the decision maker must consider increases. This paper suggests that the reasons for this rapid growth can be understood by comparing common practices in formulating rules with a utility-function formulation of the same decision problem. A related computational experiment indicates that rapid rule growth may not be intrinsic to rule-based approaches, but may instead be associated with the way rule sets are developed in practice.
The main objective of this paper is to present the value-driven approach to simulating decision processes, an optimization-based technique that shares several features with chess-playing algorithms. Value-driven methods are an attractive alternative to rule-based systems in situations where the number of objectives considered by the decision maker leads to an excessively large rule base. The complexity of a value-driven algorithm typically scales only linearly with the number of factors considered, and execution time may scale similarly.
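To make the contrast with rule-based approaches concrete, the following sketch shows a single value-driven decision step under an assumed weighted-sum utility; the objectives, weights, candidate actions, and outcome model are invented for illustration and are not taken from the paper. Note that adding an objective adds one term to the sum, whereas a rule set written to cover combinations of the same objectives tends to grow multiplicatively.

```python
# Illustrative sketch only: a one-ply value-driven decision step with a
# weighted-sum utility. All names below are hypothetical.

def situational_value(state, objectives):
    # Weighted sum of per-objective value functions; evaluation cost
    # grows linearly with the number of objectives.
    return sum(weight * value_fn(state) for weight, value_fn in objectives)

def choose_action(state, actions, project, objectives):
    # Chess-style lookahead, one ply deep: project each candidate action
    # and select the one whose projected state scores highest.
    return max(actions, key=lambda a: situational_value(project(state, a),
                                                        objectives))

# Hypothetical objectives for an engagement-level air-combat entity.
objectives = [
    (1.0, lambda s: -s["threat_exposure"]),  # survive
    (0.6, lambda s: s["target_coverage"]),   # accomplish the mission
    (0.2, lambda s: s["fuel_remaining"]),    # conserve fuel
]

def project(state, action):
    # Stand-in outcome model; a real simulation would predict the
    # consequences of each action over a short horizon.
    return {k: v + action.get(k, 0.0) for k, v in state.items()}

state  = {"threat_exposure": 0.4, "target_coverage": 0.2, "fuel_remaining": 0.9}
evade  = {"threat_exposure": -0.3, "target_coverage": -0.1, "fuel_remaining": -0.05}
attack = {"threat_exposure": 0.2, "target_coverage": 0.5, "fuel_remaining": -0.10}
best = choose_action(state, [evade, attack], project, objectives)
```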
Value-driven decision simulations are not new, and while value-driven methods are occasionally used in the DIS community, a clear understanding of the methodology is not widespread. Value-driven methods have their difficulties too, and rule-based methods will be more appropriate in many cases. However, experience in some regimes, most notably engagement-level air combat, confirms that in other cases value-driven systems produce more realistic and robust behavior. I hope that this paper will stimulate thought about value-driven methods, and about their advantages when scaling to large problems where practical rule-based systems are difficult to develop.
A second objective of this paper is to encourage research into the correspondences between rule-based methods and value-driven methods that address the same decision problem. It appears that the correspondences are a potentially fruitful way of uncovering implicit assumptions in rules, assumptions that lead to problems should the scope of the decision problem be extended. The suggested research would hopefully improve practices and procedures for rule set development so as to help mitigate scaling problems.
Battlefield simulations can be broadly classified as either aggregate-level or entity-level simulations. Entities in constructive simulations are said to be aggregated, since each contains information pertaining to a collection or group of entities. Entities in a virtual simulation, on the other hand, are basic entities in the sense that they are not broken down further. In recent years, there has been a push to link constructive and virtual simulations; examples of successful linkages include the BBS-to-SIMNET, EAGLE-to-SIMNET, and AWSIM-to-ModSAF efforts. Typically, the aggregated entities disaggregate when they cross the boundary of a "virtual playbox" inside which interactions take place at the entity level.
Our premise is that the dichotomy between aggregated and disaggregated states is a false one. In this paper we propose a scheme wherein each entity maintains, or furnishes on demand, state information about two or more levels of aggregation by storing the relevant data for all levels simultaneously.
We present problems with traditional approaches to aggregation, such as temporal inconsistency, chain disaggregation, and network flooding. Temporal inconsistency occurs when an entity performs actions within an interval of simulated time that it could not have performed in a real-life situation. Chain disaggregation occurs when the disaggregation of one aggregated entity causes other nearby aggregated entities to disaggregate. Network flooding occurs when a large volume of messages strains network capacity. We also address problems that we envisage will beset the simulation world, such as aggregation of dissimilar entities, dynamic aggregation, and the perceiver problem.
We describe a framework with which these problems could be solved, or at least mitigated, by retaining information about all levels of aggregation and maintaining consistency between them. We examine the benefits and disadvantages of our scheme. Finally, we analyze the demands made by our scheme on network, memory, and CPU resources, compare them with the requirements of traditional methods, and propose new directions for research.
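As a minimal sketch of the proposed idea, consider an entity that keeps both a vehicle-level and a unit-level view of its state and re-derives the aggregate view whenever the disaggregate view changes. The classes, fields, and consistency rule below are illustrative assumptions, not the paper's design.

```python
from dataclasses import dataclass, field

@dataclass
class Vehicle:
    # Disaggregate (entity-level) state.
    x: float
    y: float
    strength: float  # remaining combat strength, 0..1

@dataclass
class Platoon:
    vehicles: list = field(default_factory=list)
    # Aggregate-level view, stored alongside the entity-level view.
    centroid: tuple = (0.0, 0.0)
    aggregate_strength: float = 0.0

    def reconcile(self):
        # Re-derive the aggregate state from the disaggregate state so
        # both levels stay consistent and either can be furnished on
        # demand, with no separate disaggregation step (and hence no
        # trigger for chain disaggregation).
        if not self.vehicles:
            return
        n = len(self.vehicles)
        self.centroid = (sum(v.x for v in self.vehicles) / n,
                         sum(v.y for v in self.vehicles) / n)
        self.aggregate_strength = sum(v.strength for v in self.vehicles)

    def view(self, level):
        # Furnish whichever level of aggregation a perceiver asks for.
        if level == "entity":
            return self.vehicles
        return self.centroid, self.aggregate_strength
```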
April 24 - 30
The paper illustrates a number of basic principles about aggregation and disaggregation in combat modeling by working through the mathematics and phenomenology of a concrete simplified example: ground combat taking place in a number of sectors and subsectors within a theater. The example demonstrates the importance to combat modeling, especially in this era of distributed simulation and model families, of approaching the issues of aggregation and disaggregation with care. "Care" involves a dose of theory and mathematics, coupled with experimentation, rather than the usual dash to coding. It also implies not relying solely upon intuition, because valid aggregation and disaggregation relationships are often much more complicated than intuition would suggest, owing to complications of competitive strategies, imperfect command-control, configurational constraints, stochastic considerations, and a variety of frictions that manifest themselves as one moves to higher levels (lower resolutions). Theoretical analysis can clarify many of the issues, but experience suggests that experimentation with higher-resolution models is very important in any effort to develop good aggregations and to understand their limitations.
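As a minimal illustration of why sector-level detail does not aggregate away trivially, suppose (purely as an assumed example; the paper's worked example may differ) that attrition in each of n sectors follows a Lanchester linear law with a common coefficient a:

```latex
% Per-sector attrition and theater totals:
\begin{align*}
  \dot B_i &= -a\, R_i B_i, \qquad
  B = \sum_{i=1}^{n} B_i, \qquad
  R = \sum_{i=1}^{n} R_i, \\
  \dot B &= -a \sum_{i=1}^{n} R_i B_i
          = -\frac{a}{n}\, R B
            \;-\; a \sum_{i=1}^{n} \bigl(R_i - \bar R\bigr)\bigl(B_i - \bar B\bigr).
\end{align*}
```

The naive theater-level law (the first term alone) is exact only when the cross-sector covariance term vanishes, that is, when both sides spread their forces uniformly; any correlated concentration of forces, such as that produced by competitive sector allocation, changes the aggregate attrition rate.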
May 1 - 7
In order for CGFs to be truly autonomous, they must be able to "reason" about terrain. The task is made difficult, though, by the complexity and size of modern terrain databases and the performance constraints of today's simulations. This paper describes an approach not only to representing terrain compactly for specific CGF tasks, but also to simplifying (and, it is hoped, speeding up) terrain-reasoning algorithms. In particular, a representation is described that is designed to facilitate the type of cross-country movement planning (e.g., mobility corridors, avenues of approach) performed by a division-level commander and his staff. The foundation of the representation is a full topology (level 3 as defined by the Vector Product Format). Two additional layers of representation are placed on top of the base layer to provide a means of abstracting topological objects (i.e., nodes, edges, faces) into domain-specific objects (e.g., objectives, no-go areas). The application of the representation to the automatic generation of mobility corridors and avenues of approach is also discussed.
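The flavor of the layered representation can be suggested with a small sketch: a base topological layer of faces and their adjacencies, a domain layer that tags faces with mobility classes, and a search over traversable faces. The graph, labels, and search below are illustrative assumptions; the paper's VPF level-3 foundation is far richer.

```python
from collections import deque

# Base layer: topology, reduced here to face adjacency.
adjacency = {
    "f1": ["f2"],
    "f2": ["f1", "f3", "f4"],
    "f3": ["f2", "f5"],
    "f4": ["f2", "f5"],
    "f5": ["f3", "f4"],
}

# Domain layer: abstraction of topological faces into mobility classes.
mobility = {"f1": "go", "f2": "go", "f3": "no-go", "f4": "slow-go", "f5": "go"}

def mobility_corridor(start, goal):
    # Breadth-first search over traversable faces. A real planner would
    # penalize slow-go faces rather than treating them like go faces.
    parents = {start: None}
    frontier = deque([start])
    while frontier:
        face = frontier.popleft()
        if face == goal:
            path = []
            while face is not None:
                path.append(face)
                face = parents[face]
            return path[::-1]
        for nxt in adjacency[face]:
            if nxt not in parents and mobility[nxt] != "no-go":
                parents[nxt] = face
                frontier.append(nxt)
    return None

print(mobility_corridor("f1", "f5"))  # ['f1', 'f2', 'f4', 'f5'] (avoids the no-go face f3)
```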
May 8 - 14
Multicasting, as an alternative to broadcasting, has great potential to improve DIS scalability by reducing the demands on network bandwidth and computational resources. To incorporate multicasting into DIS, we take an eclectic approach, blending new ideas based on insights discussed here with the best of previously proposed approaches. To our knowledge, no previous work has provided such a unification. The simplicity and completeness of our approach make it more promising than previous ones.
In general (and in DIS in particular), the multicast problem should be considered as consisting of two parts:
Our approach to the definition and use of multicast groups consists of four steps:
Finally, we note that management of static objects is inherently related to any multicast solution. The integration of static object management with our multicast solution is made possible by the static nature of such objects and by our use of a regular grid.
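One common way to realize grid-based grouping is sketched below; the cell size, base address, and grid width are invented for illustration and are not the paper's parameters. Each cell of the regular grid maps to a multicast group address that every host can compute independently.

```python
BASE_GROUP = 0xE0010000  # 224.1.0.0, an assumed base multicast address
CELL_SIZE = 5000.0       # assumed cell size in meters
GRID_COLS = 256          # assumed grid width in cells

def cell_of(x, y):
    # Map a position to its grid cell.
    return int(x // CELL_SIZE), int(y // CELL_SIZE)

def group_of(x, y):
    # Deterministic cell-to-group mapping: every host derives the same
    # group for a given location, with no central coordination.
    col, row = cell_of(x, y)
    return BASE_GROUP + row * GRID_COLS + col

def groups_for_interest(x, y, radius):
    # Subscribe to all cells overlapping a square bounding the circular
    # area of interest; updates then arrive only from nearby entities.
    c0, r0 = cell_of(x - radius, y - radius)
    c1, r1 = cell_of(x + radius, y + radius)
    return {BASE_GROUP + r * GRID_COLS + c
            for r in range(r0, r1 + 1)
            for c in range(c0, c1 + 1)}
```

Because a static object is fixed in space, it maps permanently to one such cell, which is what makes its management integrable with the same grouping scheme.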
The Marine Air Ground Task Force (MAGTF) Tactical Warfare Simulation (MTWS) system is a computer-assisted warfare gaming system designed to support training of U.S. Marine Corps commanders and their staffs. The system is scheduled for fielding during Government fiscal year 1995. Requirements for the system were written over five years ago. Since that time, training has undergone a considerable transition from uni-service scenarios to multi-service, multi-national, joint warfare scenarios. The primary use of MTWS will continue to be within the Fleet Marine Force and the USMC University setting. However, there are growing demands for the system to participate in joint exercises involving other constructive simulations and diverse virtual simulations. Therefore, the requirement to support exercises from the Marine Expeditionary Unit (MEU) through Marine Expeditionary Force (MEF) levels is being extended to cover larger force structures with an order-of-magnitude increase in the number of game objects. The technical challenge is to significantly enlarge system capacity without sacrificing system performance.
Several scalability issues are being investigated to achieve this expanded capability. This paper describes hardware and software approaches and alternatives relating to the MTWS architecture and functionality. The following principal areas are discussed:
May 29 - June 4
Building a large-scale distributed military simulation that is scalable in all aspects is a very difficult task. A number of correct decisions must be made for a simulation to provide correct parallel and distributed synchronization with high performance, uncompromised fidelity, scalable fundamental algorithms such as proximity detection and scene generation with environmental effects, unlimited incremental growth in functionality, scalable software-engineering practices, and uses of the simulation beyond real-time training exercises. Furthermore, it is important to understand the basic principles of scalability in parallel and distributed simulation as a function of the number of simulated objects, the number of processing nodes, and the time dynamics describing object interactions.
All of these issues are addressed in this paper, which describes the unique, patent-pending, object-oriented simulation framework called the Synchronous Parallel Environment for Emulation and Discrete-Event Simulation (SPEEDES). SPEEDES supports multiple synchronization strategies (both conservative and optimistic) in a nearly transparent manner while offering new mechanisms for limiting optimism and risk in ways that provide stability where traditional approaches have failed. The communication requirements of SPEEDES are carefully encapsulated so that communication support can be optimized for any hardware configuration, thus providing scalable communications on any parallel hardware platform and/or shared-memory system. SPEEDES also supports a reliable, scalable, and heterogeneous interactive computing environment over wide-area networks by providing a simple, easy-to-use interface for linking external modules.
SPEEDES also offers a unique scalable framework for developing discrete-event simulations because events are defined as active objects that are encapsulated separately from passive simulation objects. This strategy promotes maximum reusability, since the simulation objects are kept ignorant of the simulation environment. This would not be true if active and passive methods were mixed together, as many traditional so-called object-oriented or process-oriented strategies dictate. SPEEDES also makes a sharp separation between simulation object types, so that separate development efforts can build and integrate their objects into a single framework without requiring extensive formal agreements on detailed interfaces and/or conventions.
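The active-event/passive-object separation can be sketched as follows; the class and method names are illustrative stand-ins, not the SPEEDES interface.

```python
class Tank:
    # Passive simulation object: pure state, with no knowledge of event
    # processing, rollback, or synchronization.
    def __init__(self, x, fuel):
        self.x, self.fuel = x, fuel

class Event:
    # Active object: owns its timestamp, its target, and the state change
    # it applies; rollback support lives here, not in the simulation object.
    def __init__(self, time, target):
        self.time, self.target = time, target
    def process(self):
        raise NotImplementedError
    def unprocess(self):
        raise NotImplementedError

class MoveEvent(Event):
    def __init__(self, time, target, dx):
        super().__init__(time, target)
        self.dx = dx
    def process(self):
        # Save just enough state to undo, so an optimistic scheduler can
        # roll the event back if a straggler message arrives.
        self.saved = (self.target.x, self.target.fuel)
        self.target.x += self.dx
        self.target.fuel -= abs(self.dx) * 0.01
    def unprocess(self):
        self.target.x, self.target.fuel = self.saved

# Simplified engine-side use: process events in timestamp order, keeping
# each until it is safe to commit, and roll back when necessary.
tank = Tank(x=0.0, fuel=1.0)
event = MoveEvent(time=10.0, target=tank, dx=5.0)
event.process()
event.unprocess()  # state restored if the event proves premature
```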
June 5 - 11
In this paper, we examine the Legion metasystem, a project whose goal is to provide a general-purpose software framework and design paradigm for the development of large-scale distributed systems. In particular, we examine Legion's applicability to one of the highest-priority software development efforts of the U.S. government: Distributed Interactive/Interoperable Simulation (DIS). We propose that DIS be implemented over the Legion system, as Legion provides viable solutions to many of the scalability challenges introduced by DIS.
The scope of the Legion project is at the nationwide and worldwide level. Its basic goal is to present a single, seamless computing environment to geographically distributed users, one that will enable easy access to high-performance computing and simplify the development of coherent, interoperable tools for distributed computing and collaboration on information-oriented projects (e.g., research, education, training). Legion is an object-oriented development environment that provides transparent employment of distributed, heterogeneous resources, a unified persistent name space, fault tolerance, simplified performance enhancement through parallelism, a convenient user interface, and an open architecture.
Like Legion, DIS is a project of global proportions. The stated goal of DIS is to provide a general-purpose interactive distributed simulation system capable of supporting all phases and aspects of simulations of thousands to millions of heterogeneous entities on hardware dispersed around the world. This is no small goal, levying serious requirements on network, processing, management, and development technologies. Many of the scalability challenges of such a large, interoperable system could be made tractable by the organizational framework provided by a software infrastructure like Legion.
The potential benefits of a DIS-over-Legion implementation are numerous. The object-oriented design paradigm encouraged by Legion could be used to address software-complexity scalability through encapsulation, inheritance, and code reuse. The unified Legion data space provides a convenient abstraction on which to construct mechanisms for ensuring semantic consistency and conceptual scalability. The performance and productivity boosts made possible by Legion's easy-to-use parallelism would enable more detailed simulations and real-time "what-if" scenarios, and would also speed the simulation development and analysis phases. Beyond these advantages, Legion could be used as an object-encapsulation tool to support existing DIS code during a transition.
A DIS-over-Legion implementation would doubtless pose certain technical challenges; for example, Legion does not yet explicitly support real-time applications. Overcoming these challenges would be well worth the effort, given the benefits enumerated above. Moreover, beyond yielding a better, more scalable version of DIS, the required advances in Legion would result in a more powerful general-purpose distributed computing environment useful in a wide range of applications in addition to DIS.
June 12 - 18
ARPA has expressed a goal of expanding virtual simulation exercises to encompass 100,000 entities. The obvious challenges to meeting this goal are electronic issues such as communications bandwidth and computing power. However, human issues are also relevant, such as exercise management, reducing the staffing necessary for an exercise, and maintaining realism while increasing the span of control of human operators. One of the design goals of the Command Forces (CFOR) program was to address and improve the human aspects of scalability. To that end, CFOR is implemented as follows: