ABSTRACTS OF PAPERS

Analytical Determination of Interoperability between Distributed Simulations
S. Y. Harmon
harmon@ati-sd.com
Advanced Telecommunications, Inc.

The basis of an analytical technique has been developed which determines whether two simulations can interoperate. Two functionally correct simulations can interoperate correctly only if no artifacts arise simply because the simulations interact. Three artifact classes have been defined: accuracy inconsistencies, timing inconsistencies, and ordering inconsistencies. These artifacts create simulation results which do not correspond with the outcomes of the corresponding physical processes under exactly the same initial states and boundary conditions.
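The three artifact classes can be illustrated by comparing a distributed run against a monolithic reference run of the same physical process. The sketch below is an assumption for illustration only, not the author's formalism: each run is taken to be a list of (event name, timestamp, value) tuples, and the tolerances are hypothetical.

```python
# Illustrative sketch (hypothetical data format, not the paper's formalism):
# flag the three artifact classes by comparing a distributed run against a
# reference run with identical initial states and boundary conditions.

def find_artifacts(reference, distributed, value_tol=1e-6, time_tol=1e-3):
    """Return the interoperability artifacts seen in the distributed run."""
    artifacts = []
    ref_order = [name for name, _, _ in reference]
    dist_order = [name for name, _, _ in distributed]
    # Same events delivered in a different order: ordering inconsistency.
    if ref_order != dist_order and sorted(ref_order) == sorted(dist_order):
        artifacts.append("ordering inconsistency")
    ref = {name: (t, v) for name, t, v in reference}
    for name, t, v in distributed:
        if name not in ref:
            continue
        rt, rv = ref[name]
        if abs(v - rv) > value_tol:
            artifacts.append(f"accuracy inconsistency: {name}")
        if abs(t - rt) > time_tol:
            artifacts.append(f"timing inconsistency: {name}")
    return artifacts
```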

A simulation of a physical process consists of two separate descriptions: an abstract representation or model of the physical process and an implementation of that model within a computing environment. Each description has different generic characteristics which affect interoperability differently. Currently, ten model characteristics and four implementation characteristics have been identified.

Two simulations interact when a subset of the values of the dependent variable set from one simulation becomes a subset of the values for the independent variable set of the other simulation. The two interacting simulations interoperate correctly when their interactions do not create any of the interoperability artifacts described above. Constraints on the relationships between the generic characteristics of the models and their implementations have been derived from propositions defining the nature of artifacts and simulation characteristics. Currently, eight model constraints and four implementation constraints have been derived. These constraints quantitatively define the conditions which must exist to guarantee that interoperability artifacts cannot occur.
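The interaction condition above reduces to a set overlap between one simulation's dependent (output) variables and the other's independent (input) variables. A minimal sketch, with hypothetical variable names:

```python
# Sketch of the interaction condition: simulation A interacts with simulation B
# when some of A's dependent variables feed B's independent variables.

def interacts(dependents_a, independents_b):
    """True if any output of A is consumed as an input of B."""
    return bool(set(dependents_a) & set(independents_b))

def interaction_directions(sim_a, sim_b):
    """Interaction can run in either direction (or both)."""
    return (interacts(sim_a["outputs"], sim_b["inputs"]),
            interacts(sim_b["outputs"], sim_a["inputs"]))
```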

This work represents the beginning of a theory which analytically describes the phenomenon of interoperability between distributed simulations. All of the propositions upon which this work is founded are empirically testable and have a well defined general scope. All of the constraints which have been derived from these propositions form empirically testable hypotheses. Additional characteristics and constraints are known to exist and effort is ongoing to extend this analytical development in order to achieve completeness.


Use of Computer Image Generators in Distributed Simulation Exercises
Jason Novak and Joe Jennings
jnovak@mitre.org
The MITRE Corporation

In FY95 The MITRE Corporation (MITRE) was tasked to assist the U.S. Army Training and Doctrine Command (TRADOC) Battle Lab Integration Technology and Concepts Directorate (BLIT-CD) in examining the feasibility of conducting distributed simulation exercises with simulators using dissimilar, or heterogeneous, computer image generators (IGs). This paper presents the findings resulting from MITRE's investigation.

The specific question that MITRE was asked to address is whether two simulators using different IGs in a Distributed Interactive Simulation (DIS) exercise can interact in such a way that neither simulator has an unrealistic advantage due solely to the differences in the visual scenes presented by the IGs. Such differences could invalidate the distributed exercise as an analysis or training tool. This concern is often referred to as the "fair fight" issue. Although there are many factors that could affect the ability of two simulators to engage in a fair fight, this paper addresses only the impact of the visual scene provided by the IG.

BLIT-CD is interested in IGs because of their involvement in the Battle Lab Reconfigurable Simulator Initiative (BLRSI). In the BLRSI, BLIT-CD will design and ultimately field a new generation of robust, versatile simulators that are easily reconfigurable in terms of both hardware and software. These simulators are required for a variety of applications in analysis, training and decision-making tasks. One way in which BLIT-CD anticipates that these simulators will be used is in distributed exercises. In a distributed exercise the BLRSI simulators (BLRSIMS) will be required to interact with other simulators. These other simulators will often be based on different hardware platforms than the BLRSIMS. In particular, the IGs will often be different.

It has been hypothesized that the only way to ensure a fair fight in a distributed exercise is to mandate that all of the simulators in the exercise employ the same (homogeneous) IG. If this hypothesis is correct, it has significant implications for the future development of simulators in general and the BLRSIMS in particular. Therefore, BLIT-CD tasked MITRE to examine this hypothesis.

MITRE first considered what conditions are necessary for a fair fight to take place. The basic requirements are that all human participants should be able to see and identify all of the critical elements of the visual scene, that these elements should be correctly located in the scene relative to one another, and that no scene should contain significantly more or less information than another. Thus, to ensure a fair fight between two simulators both IGs must provide:

MITRE's preliminary analysis of the issues showed that even using homogeneous IGs does not ensure a fair fight. Asymmetric visualization between two identical IGs can occur if the range between them is great enough that the visual scenes they are viewing are at different levels of detail (LOD). The study shows that the use of homogeneous IGs may be a necessary condition for a fair fight but is not a sufficient one. Based on the preliminary analysis, the purpose of the study was defined as shown below.
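The LOD asymmetry can be sketched as follows. Assuming hypothetical range thresholds and feature sets, an IG rendering a distant area drops small concealing features that the local IG still draws, so one crew may believe it is concealed behind a treeline that the distant crew's IG never drew:

```python
# Sketch of range-based LOD selection; thresholds and feature sets are
# hypothetical, chosen only to show how identical IGs can disagree.

LOD_RANGES = [(500.0, 0), (2000.0, 1), (8000.0, 2)]  # (max range in m, LOD)

def select_lod(range_m):
    """Pick the LOD from the viewer's range to the viewed area."""
    for max_range, lod in LOD_RANGES:
        if range_m <= max_range:
            return lod
    return LOD_RANGES[-1][1]  # beyond the last threshold: coarsest LOD

def visible_features(viewer_range_m):
    """Features the IG includes in the scene at the selected LOD."""
    lod = select_lod(viewer_range_m)
    features = {"terrain"}
    if lod <= 1:
        features |= {"treeline", "buildings"}  # dropped at coarse LOD
    if lod == 0:
        features |= {"bushes"}
    return features
```

A crew 100 m from a treeline sees it and hides behind it; an identical IG viewing the same spot from 5 km has already dropped the treeline, so the hidden vehicle appears in the open.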

MITRE will investigate the issues associated with providing a fair fight in a DIS exercise from the perspective of the visual representation of the synthetic environment to answer the following questions:

  1. Is the use of homogeneous IGs a necessary condition to ensure a fair fight in a DIS exercise?
  2. Given that the use of homogeneous IGs is not a sufficient condition to ensure a fair fight, what other conditions are required for sufficiency?

To answer these questions it was necessary to first describe the process in which the visual scene is rendered by an IG. Section 2 of the paper provides an overview of the data and processes involved in the creation of the visual scene by an IG. This section discusses the three main elements in the development of the visual scene: the source data, the visual database, and the IG runtime rendering schemes. The purpose of this section is to describe the current state-of-the-art in image generation and to illustrate how and why the visual scene is likely to differ from IG to IG. This section provides the basis for the determination of the necessary and sufficient conditions for a fair fight.

The key findings of this study are summarized below.

MITRE concludes that unfair fights will be a fact of life in distributed exercises for some time. The impact of this situation will vary from exercise to exercise depending on a number of factors including: whether homogeneous or heterogeneous IGs are used, the complexity of the visual scenes (terrain, features and number of models), engagement ranges, and the number of dissimilar systems being simulated (dismounted infantry, vehicles, helicopters, or aircraft). For some small number of exercises, unfair fights may be avoided by using homogeneous IGs at a single LOD. For the majority of exercises the focus must be on mitigating the negative impact of those unfair fights that occur. The primary means for reducing the effects of unfair fights are through exercise control (e.g. re-instantiating unfairly killed systems) and after action reviews.

Long term efforts to require adherence to standards for database and IG development hold promise for reducing the incidence of unfair fights, although the potential for unfair fights will exist as long as IGs operate at multiple LODs.


Internal HLA Compliance
Jerry Vasend
gvasend@logicon.com
Logicon RDA

The DoD High Level Architecture Management Plan states that the primary purpose of the HLA is to address interoperability and reuse across simulations. While the existence of an HLA will greatly improve interoperability between simulations, obstacles will remain due to varying degrees of differences between simulation architectures and between a simulation's Simulation Object Model (SOM) and a given Federation Object Model (FOM). All simulation infrastructures provide a set of common services that are required to perform simulation. These services include capabilities such as time and data management. While the types of simulation services provided are common, many implementations exist with varying degrees of incompatibility that will be difficult to solve exclusively through the use of HLA. Additional complexities exist related to the management of the FOM and each SOM. The HLA states that a simulation must provide access to its SOM. This will require the use of translation software to convert between the Federation Object Model and the simulation SOM. Support for this capability in the most generic interpretation of this rule is expensive and must be maintained over time as the Federation and simulations evolve.

This paper proposes an approach to improved Federation interoperability by using an architecture that supports internal HLA compliance by using the HLA RTI as a distributed simulation kernel. Thus, while HLA does not place requirements on a simulation's internal architecture, in cases where the HLA RTI provides adequate functionality and performance for a proposed model development, the HLA RTI can serve as the model's simulation kernel. Additional capabilities required for a simulation kernel but not provided by the RTI are described. The paper also describes an approach for using HLA services such as subscriptions to manage object distribution within the proposed architecture. In addition, the paper describes a capability that allows both the SOM and FOM to be represented electronically within the simulation framework and provides translation services between the SOM and FOM through the use of a FOM/SOM Manager. Finally, the appropriateness of the HLA as a simulation kernel and the value of internal HLA compliance are addressed. Additional HLA capabilities that improve HLA applicability as a simulation kernel are recommended.

Benefits of the proposed approach include cost savings and improved interoperability. If a future model developer wishes to build a new model and the HLA RTI's performance characteristics are adequate to meet the model's requirements, the developer will not have to build a simulation kernel. The simulation kernel would be provided in the form of the HLA RTI as a GOTS product. The HLA RTI will provide critical capabilities such as time and data management and communications. Since the kernel is based on the RTI, interoperability with HLA will be automatic and minimal translation services will have to be developed. Conformance to the "HLA rules" will be simplified as follows:

Rule 1: Using this approach, the OMT becomes the basis for designing the simulation's object model.
Rule 2: Same as rule 1. However, additional capabilities must be provided to maintain both a FOM and a SOM. In effect, the SOM becomes the simulation's internal FOM. This capability is provided by the proposed FOM/SOM Manager.
Rule 3: This rule is neither helped nor hindered by the approach.
Rule 4: Since the RTI is the simulation engine, this is automatic.
Rule 5: Same as 4 with the addition that the simulation must be tested against the HLA Interface Specification.
Rule 6: Same as 4.
Rule 7: This capability is provided by the proposed FOM/SOM Manager.
Rule 8: Same as 7.
Rule 9: Same as 7.
Rule 10: Same as 4.


Foundations of the New DIS Aggregate Protocol
Billy Foss and Robert Franceschini
BFOSS@ist.ucf.edu
Institute for Simulation and Training

Recently IST redesigned the Distributed Interactive Simulation (DIS) Aggregate Protocol to simplify the linkage of aggregate and virtual simulations. The major motivation behind the new Aggregate Protocol was to unify several implementations currently in use. We wanted to generalize the protocol, instead of tailoring it to a specific linkage. Another motivation was to include enough information to reasonably represent a unit in the virtual world. We included a Silent Entity System List to represent entities not issuing PDUs, and we included an Aggregate Marking so that commanders can recognize units on a PVD. We also wanted to generalize the aggregation and disaggregation process so both aggregate and virtual simulations can request a change of state. Overall, we wanted to create an Aggregate Protocol that would satisfy the needs of future Aggregate + Virtual Linkages.

This paper outlines the major decisions made while designing the Aggregate Protocol. It explains the role of the Aggregate Controller and why it warns simulations before changing a unit's state. It explains the Silent Aggregate System List and why aggregates of aggregates are allowed. It also explains the different states of an aggregate, and how they make efficient use of network and processor capability. This paper compares the forerunners of this design and discusses why their ideas were or were not included in the new DIS Aggregate Protocol. It also discusses the extensions we added to the protocol.


Multiple Level Federation Issues Paper
Gary N. Bundy and David W. Seidel
gbundy@mitre.org
The MITRE Corporation

This paper represents the results of investigation and analysis conducted during the initial phases of an experiment in the use of the High Level Architecture (HLA). The HLA was developed for the Department of Defense by the Defense Modeling and Simulation Office (DMSO) to establish a common high-level simulation architecture to facilitate the interoperability of all types of models and simulations among themselves and with C4I systems. The purpose of this experiment was to investigate linking models from three different levels of abstraction and prototype an HLA federation of these models. This experiment was entitled the "Multiple Level Prototype Federation (MLPF)." For each of the three levels in the prototype federation a model typical of that level was selected. Each of the simulations represents members of larger federations of simulations. The three levels and models selected for this experiment are:

In conducting an exercise within the Department of Defense, objects need to be represented at multiple levels of detail. The more detailed simulations require more resources to accomplish their modeling and consequently must be restricted to special purposes. By representing objects in an exercise with multiple levels of detail, we can use the detailed simulations to represent pockets where greater attention must be paid to the ongoing activity. These pockets can be defined based on any of a number of characteristics (fixed geography, sphere of influence around a vehicle, sphere of influence around an activity, etc.). However, we must also model the objects with less detail for those simulations that need a complete view of their area of interest, including activity taking place in the detailed pocket of another model.
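One way to realize the pockets above is as spheres of influence; the centers, radii, and level names in this sketch are hypothetical, not the MLPF design:

```python
# Sketch: route an object to the detailed simulation when it falls inside
# any sphere-of-influence pocket, and to the aggregate simulation otherwise.

def distance(a, b):
    """Euclidean distance between two points given as coordinate tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def detail_level(position, pockets):
    """'high' inside any (center, radius) pocket, 'aggregate' elsewhere."""
    for center, radius in pockets:
        if distance(position, center) <= radius:
            return "high"
    return "aggregate"
```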

Our investigation revealed numerous issues with respect to multiple level HLA federations. Three major issues that we investigated are:

  1. Model Communication
  2. Hand-off protocols (who initiates action)
  3. Object updates (protocol for updating objects after a hand-off)

In developing a multiple level federation, protocols for communication amongst the various models must be developed. Two proposals for this issue are:

A second crucial protocol issue that must be decided amongst the federation participants is how to handle hand-offs. A hand-off occurs when a model allows another model to control an object, or a set of attributes of an object, that the first model currently owns. Hand-offs will not occur consistently in a controllable manner unless the models cooperate and agree to a common protocol for performing hand-offs.

Once a hand-off has occurred, a third protocol must be decided that describes which model is responsible for updating the values of the handed-off objects. There are two approaches. The first approach is that the model which created an object controls and updates the object and its attributes, regardless of which model currently has responsibility for the object. The second approach is that the model with responsibility for an object, or a set of that object's attributes, updates the object or attributes for which it is responsible.
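The second approach can be sketched with attribute-level ownership. The class, federate, and attribute names below are hypothetical illustrations, not the HLA ownership-management API:

```python
# Sketch of responsibility-based updates after a hand-off: only the model
# currently responsible for an attribute may change its value.

class FederationObject:
    def __init__(self, name, creator, attributes):
        self.name = name
        self.values = dict(attributes)
        # Attribute-level ownership; the creator starts responsible for all.
        self.owner = {attr: creator for attr in attributes}

    def hand_off(self, attr, from_model, to_model):
        """Transfer responsibility; only the current owner may divest."""
        if self.owner[attr] != from_model:
            raise ValueError(f"{from_model} does not own {attr}")
        self.owner[attr] = to_model

    def update(self, attr, value, model):
        """Second approach: only the responsible model may update."""
        if self.owner[attr] != model:
            raise ValueError(f"{model} is not responsible for {attr}")
        self.values[attr] = value
```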

This paper discusses these three issues and the advantages and disadvantages of various protocols for dealing with them.


HLA Testing: Separating Compliance From Interoperability
Margaret Loper
margaret.loper@gtri.gatech.edu
Georgia Tech Research Institute

The High Level Architecture is part of the DoD common technical framework which facilitates the interoperability of all types of models and simulations. There are (at least) two steps to interoperability: (1) simulations need to be able to exchange data and (2) simulations need to understand the data that is exchanged. The ability of a simulation to exchange data via the common framework established by HLA must be tested to ensure that the simulation is complying with the established design rules and interfaces. Once the framework has been tested, the ability of the simulation to understand the data exchanged must also be tested to ensure proper operation of the federation.

To accomplish this testing, a process has been developed which consists of two phases. The first phase addresses the common HLA framework, specifically whether a simulation complies with the functional elements, interfaces, and design rules that allow it to exchange data. This is known as Compliance Testing. The second phase addresses the simulations ability to understand the data and participate in a federation. This phase is called Federation Testing.

The purpose of compliance testing is to ensure that a federate conforms its actions to the Interface Specification, the Object Model Template, and the simulation and RTI rules. Compliance testing does not guarantee interoperability; rather, it is the first step. Federation testing is dependent on compliance testing and includes the requirements for the simulations, compatibility among simulations in a way that matters for federations, and the Federation Object Model (FOM) contents.

This paper will discuss the test process being developed for HLA and contrast it with other testing efforts.


MAGTF Tactical Warfare Simulation (MTWS) Interoperability Issues
Curtis L. Blais
curt@visicom.com
VisiCom Laboratories, Inc.

The Marine Air-Ground Task Force (MAGTF) Tactical Warfare Simulation (MTWS) is a computer-assisted, multi-sided warfare gaming system designed to support training of U. S. Marine Corps commanders and their staffs. In recent years, new training and operational support requirements have extended the mission of MTWS in several directions, from participation in exercises involving individual platform simulators, to participation in joint exercises involving multiple, dissimilar warfare models, to stimulation of real-world Command, Control, Communications, and Intelligence (C3I) systems. Achieving these requirements demands varying levels of interoperability, from common representation of the battlespace to exchange of data through common message formats.

MTWS development is addressing interoperability issues across these various fronts. This paper describes (1) MTWS capabilities in the Aggregate Level Simulation Protocol (ALSP) Joint Training Confederation (JTC), with particular focus on a planned approach for interfacing dissimilar ground models; (2) a planned architecture for development of a Distributed Interactive Simulation (DIS) interface; and (3) initial capabilities and planned development approach for interoperability of MTWS with Marine Corps tactical C3I systems. In each area, technical issues are discussed, pointing out benefits and limitations of the proposed approaches.


Fuzzy Sets and Neural Networks Applied in Multi-User Networked Virtual Environments
Paulo Camargo Silva
camargo@immd8.informatik.uni-erlangen.de
University of Erlangen-Nurnberg

Dead reckoning algorithms make predictions in multi-user networked virtual environments. However, when the number of entities is very large and their behavior is complex, with intensive relationships among them, the predictions are not simple to make. The mathematical description of such complex, intensively interacting entities is not a simple matter to simulate in networked environments. We suggest in this article that the structure of multi-user networked virtual environments can be expressed in terms of Fuzzy Cognitive Maps (FCMs). This neural network can make predictions for a very large number of complex entities with intensive relationships among them in multi-user networked virtual environments. In this article we introduce the structure of multi-user networked virtual environments based on FCMs. Moreover, we show that FCMs can approximate functions. With this approximation we show that the predictions made by dead reckoning algorithms can be made with FCMs by using BIOFAMs (binary input-output fuzzy associative memories). We use this structure as the language of a logical framework. This framework is made in terms of a modal logic of knowledge and belief for multiple agents. This logic is able to develop protocols in the distributed system that supports the multi-user networked virtual environment.
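For context, the dead-reckoning baseline the abstract refers to works roughly as follows: between state updates, remote hosts extrapolate an entity's position from its last reported kinematics, and the owning host issues a new update only when the true state drifts past an error threshold. A first-order (constant-velocity) sketch, with a hypothetical threshold:

```python
# First-order dead reckoning sketch: extrapolate from the last reported
# state, and send a fresh update only when prediction error grows too large.

def extrapolate(position, velocity, dt):
    """Predicted position dt seconds after the last update (per axis)."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

def needs_update(true_position, predicted_position, threshold=1.0):
    """Owning host compares ground truth against its own extrapolation."""
    error = sum((t - p) ** 2
                for t, p in zip(true_position, predicted_position)) ** 0.5
    return error > threshold
```

The paper's point is that this per-entity extrapolation becomes inadequate when entity behaviors are strongly coupled, which is what motivates the FCM-based predictor.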
