Use of Computer Image Generators in Distributed Simulation Exercises

Jason Novak and Joe Jennings
The MITRE Corporation
jnovak@mitre.org

EXECUTIVE SUMMARY

In this era of declining military budgets, the United States Army has decided to make a substantial investment in simulation. In particular, the Army is using the capabilities provided by the Distributed Interactive Simulation (DIS) standards to link virtual, constructive and live simulations on a common, synthetic battlefield for training, test and evaluation, course of action analysis, and mission rehearsal. The advantage of linking dissimilar simulations together is that it makes it possible to represent many battlefield phenomena that would be difficult to represent in a single type of simulation. For example, the use of virtual simulation allows the introduction of the soldier's decision making process, with all of its variability and unpredictability, into a simulation exercise. The disadvantage is that it is now more difficult to ensure that all of the players in a simulation are able to interact with each other and their environment consistently and realistically. One particular concern is that no player in a simulation should gain an advantage over other players through unrealistic interactions caused by the differences in simulations or simulators. This is often referred to as the "fair fight" issue, although the concern is not really over fairness but over the negative impact that unfair or unrealistic interactions can have on training or analysis.

The intent of a virtual simulation is to provide stimuli to the soldiers that will cause them to react in the same way they would on a real battlefield. The primary stimulus that is provided by almost all virtual simulators is visual. The ability to provide a realistic visual scene is both the most important and most difficult task of a virtual simulator. The device that performs this task is usually referred to as the computer Image Generator (IG).

The IG combines information about the state and activity of all of the entities in a simulation with terrain and other database information to construct the visual scene that is presented to the human operators via the displays or monitors in the simulator. In most simulators the IG is the principal means by which the human receives information about the synthetic environment. It is not possible for any IG to provide a real-time rendering of all of the complex elements of a real-world visual scene. Thus IG-produced visual scenes are filtered and stylized representations of the real world. The task facing the IG is to render a scene that contains all of the visual elements that are required for: (1) the simulation task (training, analysis, mission rehearsal or others), and (2) the operation of the system being simulated, which could include ground vehicles, helicopters, aircraft or dismounted soldiers. IGs require specially constructed visual databases to provide the raw data from which each scene is constructed. Differences in the approaches taken by different vendors to IG and database development can result in different renderings of the same visual scene. This is the basis of the problem that this paper investigates. The question that arises is whether two simulators using different IGs in a DIS exercise can interact in such a way that neither simulator has an unrealistic advantage due solely to the differences in the visual scenes presented by the IGs. In the terminology introduced above we can ask, "Can heterogeneous IGs provide common visual scenes that support a fair fight in a distributed simulation exercise?"

It has been hypothesized that the only way to ensure a fair fight, with regard to the visual scene, in a distributed exercise is to mandate that all of the simulators in the exercise employ the same (homogeneous) IG. If this hypothesis is correct, it has significant implications for the future development of simulators for the Army. Therefore, the purpose of this paper is to test this hypothesis.

We first considered what conditions are necessary for a fair fight to take place. The basic requirements are that all human participants should be able to see and identify all of the critical elements of the visual scene, that these elements should be correctly located in the scene relative to one another, and that no scene should contain significantly more or less information than another. Thus, to ensure a fair fight between two simulators both IGs must provide:

Our preliminary analysis of the issues showed that even using homogeneous IGs does not ensure a fair fight. Asymmetric visualization between two identical IGs can occur if the range between them is great enough that the visual scenes they are viewing are at different levels of detail (LOD). It is shown in the study that the use of homogeneous IGs may be a necessary condition for a fair fight but it is not a sufficient condition. Based on the preliminary analysis, the purpose of the study was defined as shown below.

Investigate the issues associated with providing a fair fight in a DIS exercise from the perspective of the visual representation of the synthetic environment to answer the following questions:

  1. Is the use of homogeneous IGs a necessary condition to ensure a fair fight in a DIS exercise?
  2. Given that the use of homogeneous IGs is not a sufficient condition to ensure a fair fight, what other conditions are required for sufficiency?
To answer these questions it was necessary to first describe the process by which the visual scene is rendered by an IG. Section 2 of the paper provides an overview of the data and processes involved in the creation of the visual scene by an IG. This section discusses the three main elements in the development of the visual scene: the source data, the visual database, and the IG runtime rendering schemes. The purpose of this section is to describe the current state-of-the-art in image generation and to illustrate how and why the visual scene is likely to differ from IG to IG. This section provides the basis for the determination of the necessary and sufficient conditions for a fair fight.

The key findings of this study are summarized below.

We conclude that unfair fights will be a fact of life in distributed exercises for some time. The impact of this situation will vary from exercise to exercise depending on a number of factors including: whether homogeneous or heterogeneous IGs are used, the complexity of the visual scenes (terrain, features and number of models), engagement ranges, and the number of dissimilar systems being simulated (dismounted infantry, vehicles, helicopters, or aircraft). For some small number of exercises, unfair fights may be avoided by using homogeneous IGs at a single LOD. For the majority of exercises the focus must be on mitigating the negative impact of those unfair fights that occur. The primary means for reducing the effects of unfair fights are through exercise control (e.g. re-instantiating unfairly killed systems) and after action reviews.

Long term efforts to require adherence to standards for database and IG development hold promise for reducing the incidence of unfair fights, although the potential for unfair fights will exist as long as IGs operate at multiple LODs.

SECTION 1

INTRODUCTION

1.1 BACKGROUND

In this era of declining military budgets, the United States Army has decided to make a substantial investment in simulation. In particular, the Army is using the capabilities provided by the Distributed Interactive Simulation (DIS) standards to link virtual, constructive and live simulations [1] on a common, synthetic battlefield for training, test and evaluation, course of action analysis, and mission rehearsal. The advantage of linking dissimilar simulations together is that it makes it possible to represent many battlefield phenomena that would be difficult to represent in a single type of simulation. For example, the use of virtual simulation allows the introduction of the soldier's decision making process, with all of its variability and unpredictability, into a simulation exercise. The disadvantage is that it is now more difficult to ensure that all of the players in a simulation are able to interact with each other and their environment consistently and realistically. One particular concern is that no player in a simulation should gain an advantage over other players through unrealistic interactions caused by the differences in simulations or simulators. This is often referred to as the "fair fight" issue, although the concern is not really over fairness but over the negative impact that unfair or unrealistic interactions can have on training or analysis.

The intent of a virtual simulation is to provide stimuli to the soldiers that will cause them to react in the same way they would on a real battlefield. The primary stimulus that is provided by almost all virtual simulators is visual. The ability to provide a realistic visual scene is both the most important and most difficult task of a virtual simulator. The device that performs this task is usually referred to as the computer Image Generator (IG).

The IG combines information about the state and activity of all of the entities in a simulation with terrain and other database information to construct the visual scene that is presented to the human operators via the displays or monitors in the simulator. In most simulators the IG is the principal means by which the human receives information about the synthetic environment. It is not possible for any IG to provide a real-time rendering of all of the complex elements of a real-world visual scene. Thus IG-produced visual scenes are filtered and stylized representations of the real world.[2] The task facing the IG is to render a scene that contains all of the visual elements that are required for: (1) the simulation task (training, analysis, mission rehearsal or others), and (2) the operation of the system being simulated, which could include ground vehicles, helicopters, aircraft or dismounted soldiers. IGs require specially constructed visual databases to provide the raw data from which each scene is constructed. Differences in the approaches taken by different vendors to IG and database development can result in different renderings of the same visual scene. This is the basis of the problem that this paper investigates. The question that arises is whether two simulators using different IGs in a DIS exercise can interact in such a way that neither simulator has an unrealistic advantage due solely to the differences in the visual scenes presented by the IGs. In the terminology introduced above we can ask, "Can heterogeneous IGs provide common visual scenes that support a fair fight in a distributed simulation exercise?"
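As a sketch of the IG's per-frame task described above, the following Python fragment selects the entities and terrain features that fall within a viewer's visibility range. The names and the simple range test are assumptions for illustration only, not any vendor's actual rendering pipeline:

```python
from dataclasses import dataclass

@dataclass
class EntityState:
    """State received from a remote simulator (fields are illustrative)."""
    entity_id: int
    position: tuple   # (x, y, z) in world coordinates
    heading: float    # degrees

def compose_frame(entity_states, terrain_db, eye, visibility_range):
    """Gather the scene elements an IG would hand to its rendering
    pipeline: every entity and terrain feature within visibility range
    of the viewer's eye point (a crude stand-in for real scene culling)."""
    def in_range(pos):
        dx, dy = pos[0] - eye[0], pos[1] - eye[1]
        return (dx * dx + dy * dy) ** 0.5 <= visibility_range

    visible_entities = [e for e in entity_states if in_range(e.position)]
    visible_features = [f for f in terrain_db if in_range(f["position"])]
    return visible_entities, visible_features
```

The point of the sketch is only that the rendered scene is a function of both the networked entity states and the local visual database; any difference in either input, or in how the inputs are filtered, changes what the operator sees.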

It has been hypothesized that the only way to ensure a fair fight, with regard to the visual scene[3], in a distributed exercise is to mandate that all of the simulators in the exercise employ the same (homogeneous) IG. If this hypothesis is correct, it has significant implications for the future development of simulators for the Army. Therefore, the purpose of this paper is to test this hypothesis.

We first considered the necessary conditions for a fair fight to take place. The basic requirements are that all human participants should be able to see and identify all of the critical elements of the visual scene, that these elements should be correctly located in the scene relative to one another, and that no scene should contain significantly more or less information than another. Thus, to ensure a fair fight between two simulators both IGs must provide:

The first two factors listed above are primarily determined by the extent to which the visual databases employed by the IGs are correlated. The third factor is affected both by the correlation of the databases and by the manner in which the data is manipulated by the IG at runtime to render the visual scene. Thus to determine whether two IGs can participate in a fair fight, it is necessary to examine the impact of differences in databases as well as the differences in the operation of the IGs.

1.2 PROBLEM STATEMENT

Our task is to examine the null hypothesis that a distributed simulation exercise requires the use of homogeneous IGs to provide visual scenes that support a fair fight. After some initial study, we determined that this is not an appropriate statement of the problem because, as is shown below, the fact that two simulators have exactly the same IGs is no guarantee that all of the visual conditions for a fair fight, specifically symmetric visualization, will occur.

Asymmetric visualization between two identical IGs can occur if the range between them is great enough that they are viewing the visual scene at different levels of detail (LOD). IGs will render the portion of the visual scene closest to the simulator at the highest possible LOD. However, it is almost impossible for an IG to represent the entire scene out to the limit of visibility at this same LOD. Therefore, all IGs utilize some technique for lowering the LOD in the scene as the distance from the simulator's location increases or as the scene becomes increasingly complex (due, for example, to an increased number of moving models). These techniques are applied at runtime based upon the load on the IG. To illustrate how changing the LOD can cause asymmetric visualization between homogeneous IGs, consider the following example:
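The load-sensitive LOD switching described above can be sketched as follows. The thresholds and the load model are hypothetical, chosen only to show how two identical IGs can end up rendering the same range at different LODs; this is not the study's own example:

```python
def lod_for_range(range_m, load_factor=1.0, thresholds=(1000.0, 3000.0)):
    """Pick a level of detail for scene content at a given range.
    LOD 0 is the most detailed. A heavily loaded IG coarsens the scene
    sooner, modeled here by inflating the effective range; both the
    thresholds and the load model are purely illustrative."""
    effective_range = range_m * load_factor
    for lod, limit in enumerate(thresholds):
        if effective_range <= limit:
            return lod
    return len(thresholds)  # coarsest LOD beyond the last threshold

# Two identical IGs, same geometric range, different runtime load:
range_between = 2800.0
lod_seen_by_a = lod_for_range(range_between)                   # lightly loaded
lod_seen_by_b = lod_for_range(range_between, load_factor=1.5)  # heavily loaded
```

With these illustrative numbers, IG A renders the other vehicle's surroundings at LOD 1 while IG B has already dropped to LOD 2, so one crew may see concealing features that the other crew's IG no longer draws.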

The discussion above indicates that the use of homogeneous IGs is not a sufficient condition for a fair fight.[4] The questions that remain to be answered are: whether the use of homogeneous IGs is a necessary condition for a fair fight, and what are the conditions required for sufficiency. Given this, the problem addressed in this paper can be stated as:

Investigate the issues associated with providing a fair fight in a DIS exercise from the perspective of the visual representation of the synthetic environment to answer the following questions:

1. Is the use of homogeneous IGs a necessary condition to ensure a fair fight in a DIS exercise?

2. Given that the use of homogeneous IGs is not a sufficient condition to ensure a fair fight, what other conditions are required for sufficiency?

1.3 METHODOLOGY

The initial task is to develop an experimental plan for testing the null hypothesis that a homogeneous IG environment is required to support a fair fight in a DIS exercise. At the beginning of the study it was thought that it would be possible to define a set of IG performance parameters that could be used to compare IG performance to determine if two (or more) heterogeneous IGs could participate in a fair fight. Therefore, the original approach was to:

  1. Define a set of IG performance parameters.
  2. Design a collection process for assigning values to these parameters for a selection of IGs.[5]
  3. Design a process for comparing these parameters to predict whether two IGs could interact in a fair fight.
  4. Test the above process on a selected pair of IGs.
The methodology described above can be termed a bottom-up approach because it focuses on the performance of the IGs at their lowest level. The advantage of this type of approach is that, if a common set of metrics can be defined, the results obtained should be generally applicable as long as the hardware configuration of the IG remains constant. Unfortunately, this approach proved to be unfeasible for two reasons. First, because there is no standardization between vendors in the way they describe their system performance, it was not possible to define a set of performance parameters (metrics) that could be equally applied to all IGs. Secondly, and more importantly, even identical performance between two IGs is no guarantee of a fair fight - as was shown above.

Next, we considered an approach for comparing IGs described in a paper by a group at Loral Advanced Distributed Simulation (LADS). [6] Our version of this approach called for performing a qualitative and quantitative analysis of the visual scene provided by a simulator traversing through three scripted courses in the same visual database. The three courses would be designed to provide low, medium and high levels of static environment complexity. While traversing each course the IG would be required to deal with an increasing level of dynamic complexity over time. An example of the static and dynamic environments to be used is below.

The experimental plan called for two IGs to be run over each course simultaneously while the rendered scene was observed by subject matter experts (SMEs). Qualitative (subjective) comparisons of the visual scenes would be made by the SMEs to assess the realism of the scene under various IG loading conditions. The scenes would be data logged to allow the SMEs to review them as often as necessary. At the same time IG performance data would be collected to support a quantitative comparison of the IGs. The plan was to select one of the candidate IGs as the baseline (the CCTT IG was selected) and to compare each of the other IGs against it in a series of tests.

The methodology described above could be considered a top-down approach since it focuses on the top-level IG product - the final rendered scene. The advantage of such an approach is that it provides a measure of the tactical realism and utility of the scene provided by each IG. The disadvantage is that the results would only be valid for the visual database and conditions actually tested and it would probably not be feasible to test enough conditions to ensure that the comparisons would be valid under all, or even most, conditions. The second problem is that even if two IGs could be shown to be comparable in terms of the realism of the visual scene and system performance there still is no guarantee of a fair fight if there is asymmetric visualization as a result of differing LODs.

Despite the problems with this approach it was decided that it provided sufficient opportunity to gain a better understanding of the issues associated with the use of heterogeneous IGs to warrant going ahead. Unfortunately, when we tried to implement this approach we ran into insurmountable difficulties. First, we learned that developing a common visual database for four different vendors' IGs would be a major task well outside the scope of this project. (The issue of the lack of commonality of visual databases across IGs has a major impact and is discussed in detail in the next section.) Since it was not possible to conduct an experiment to compare different IGs, we decided to run the experiment with a single IG, the CCTT IG, to collect baseline data for future comparison and to test the test procedure itself. Our plan was to obtain one of the first CCTT IGs to be delivered and to test it at the simulation facility at the Institute for Defense Analyses (IDA). However, delivery of the CCTT IGs was behind schedule and the Program Manager was unable to provide IDA with an IG for testing.

With our options for performing an experiment-based study exhausted we looked at alternatives for completing the study. In the process of developing and attempting to implement the approaches described above we had learned a number of lessons that are directly applicable to the issue of using heterogeneous IGs in distributed simulations. We decided to conclude the study by documenting these lessons and, in the process, answering the two questions listed in the Problem Statement in the previous section.

1.4 OVERVIEW OF THE PAPER

Section 2 provides an overview of the data and processes involved in the creation of the visual scene by an IG. The purpose of this section is to describe the current state-of-the-art in image generation and to illustrate how and why the visual scene is likely to differ from IG to IG. This section provides the basis for the determination of the necessary and sufficient conditions for a fair fight.

Section 3 lists the necessary and sufficient conditions for a fair fight given the current state-of-the-art as described in section 2. The main finding in this section is that the necessary conditions for a fair fight are overly constraining to the development of IGs and the conduct of distributed exercises. The remainder of the section examines the impact of these necessary conditions and describes alternatives for both the near and far terms.

Section 4 summarizes the main findings of the study and lists our conclusions and recommendations.

SECTION 2

DATA AND PROCESSES IN THE DEVELOPMENT OF COMPUTER GENERATED IMAGERY

2.1 BACKGROUND: VISUAL DATABASES

The visual scene is rendered from the data in the visual database by the graphical processes running in the IG. The visual database is formed from raw source data by database generation system software associated with the visual database format. To provide a fair fight across a distributed network of simulators requires a high degree of commonality in the visual scene being presented to each simulator. The degree of commonality of this visual scene across the entire networked simulation framework is one of the key issues affecting interoperability and fair fight between IG platforms.

Currently, there is little commonality of the visual scene across simulator networks because:

This section will provide background information on the elements of the visual scene rendering process and will illustrate how differences in the visual scene can arise even between homogeneous IGs.

2.1.1 Summary of Source Data Formats

Raw source data is found in a variety of formats and resolutions, each of which may be used in constructing the database. The source data currently being used in the development of DIS synthetic environment databases are categorized as vector, raster, and 3-D model data for digital products in the Mapping, Charting, and Geodesy (MC&G) field, and as spatial data exchange formats that are designed to provide a mechanism for data sharing between systems and organizations.[7]

Vector data formats are associated with spatial data that represents points (a tree), lines (a road), and/or areal features (a lake or a forest), and their respective attributes. These formats provide the primary source of terrain feature information for the development of DIS synthetic environment databases. Examples of vector data formats are:

Raster or matrix data formats represent spatial data in the form of a two-dimensional array (rows and columns) of regularly spaced elements or values such as pixels or terrain elevation post spacings. The simulation community uses these formats for the distribution and exchange of terrain elevation data, geospecific imagery, texture patterns, and map background display information. Some examples of the raster/matrix formats are:

Three-dimensional model formats are used to exchange description and size information about objects between computer-aided design (CAD) packages and other similar applications. Two examples of these are:

The key point is that for each type of source data required for the visual database (feature, elevation and model) there are at least two choices of format to use. In fact there are often more than two choices since some of the formats, such as DTED, have multiple levels of resolution. The choice of the source data format to be used is determined by a number of factors including the availability of data for the specific portion of the world to be modeled, and the compatibility of the data with the visual database format selected (which, in turn, is determined by the choice of the IG). Thus one developer may choose one set of source data to model a given piece of terrain while another developer could choose a different set for the same terrain based on different needs of the IGs each developer is working with. This is the beginning of the problem of incompatibility of the visual scene across heterogeneous IGs.
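To make the vector/raster distinction concrete, the following sketch shows how the two kinds of source data might be held in memory. The structures and values are illustrative only and do not correspond to any particular exchange format:

```python
# Vector data: geometric primitives plus attributes (illustrative structures).
tree = {"type": "point", "coords": [(10.0, 20.0)],
        "attributes": {"feature": "tree"}}
road = {"type": "line", "coords": [(0.0, 0.0), (50.0, 10.0), (120.0, 15.0)],
        "attributes": {"feature": "road", "surface": "paved"}}
lake = {"type": "area", "coords": [(0.0, 0.0), (30.0, 0.0), (30.0, 30.0), (0.0, 30.0)],
        "attributes": {"feature": "lake"}}

# Raster data: a regular grid of values, here elevation posts at fixed spacing.
POST_SPACING_M = 100.0  # roughly the post spacing of DTED Level 1
elevation_grid = [
    [120.0, 122.5, 125.0],
    [118.0, 121.0, 124.0],
    [115.0, 119.5, 123.5],
]

def elevation_at_post(row, col):
    """Look up the value stored at one grid post."""
    return elevation_grid[row][col]
```

Vector features carry explicit geometry and attribution per feature, while raster products carry one value per cell; a database generation system must merge both kinds into a single polygonal database.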

2.1.2 Summary of Visual Database Formats

Currently IGs are developed to operate with a specific visual database format. Therefore, selection of an IG implies the selection of a format for the visual database. Associated with each database format is a set of programming tools that defines the database generation system and that is used to import source data into the visual database. The main database formats and their database generation software are described in this section.

Database formats provide the basis for coordinating database development tailored to the specific IG hardware being used in the simulated exercise. Simulation or flight formats are used as database development and exchange tools that define the virtual environment and are characterized by their support for polygonal representation of terrain and objects. These formats were initially designed to support the IG systems behind the first aircraft simulator databases, and in the past they did an acceptable job of portraying the real world through the visual systems of independent IGs. Currently, the desire for interoperable, distributed, networked simulators all participating in the same simulated environment has forced the formerly independent IGs to play on a common battlefield. The database formats most commonly used by the DIS community are:

Section 3 discusses eleven key characteristics of visual databases. The most important of these characteristics for the purpose of interoperability are metadata and topology. Metadata provides a data definition describing the meaning of other data. This allows the data user to determine the contents of the data represented (i.e., where, when, how, and by whom the data was created, and how it relates to the real world). Metadata, in the form of data element counts and other similar header information, can be vital in allowing the contents of a data transmittal to be accessed easily and efficiently. Topology forms the basic framework for database interoperability. It describes adjacency, connectivity, and inclusion relationships between the database feature primitives (nodes, edges, and faces). Topology is essential for navigating from a particular geometric primitive (a brush-covered field) to one or more adjacent primitives (a forest or a black-top highway). The three database formats listed above are described in more detail below with an emphasis on their support for metadata and topology.
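The adjacency aspect of topology can be shown with a minimal sketch. The feature names and the edge/face structure here are hypothetical and not drawn from any of the formats discussed; the point is only that when each edge records the faces on its two sides, adjacency can be read off directly:

```python
# Faces (areal features) and the edges that bound them; each edge records
# the face on either side of it, so face-to-face adjacency is explicit.
faces = {
    "brush_field": {"feature": "brush"},
    "forest": {"feature": "forest"},
    "highway": {"feature": "black-top road"},
}
edges = [
    {"nodes": ("n1", "n2"), "left": "brush_field", "right": "forest"},
    {"nodes": ("n2", "n3"), "left": "brush_field", "right": "highway"},
]

def adjacent_faces(face_id):
    """Navigate from one face to every face sharing a boundary edge with it."""
    neighbours = set()
    for edge in edges:
        if edge["left"] == face_id:
            neighbours.add(edge["right"])
        elif edge["right"] == face_id:
            neighbours.add(edge["left"])
    return neighbours
```

A format without topology stores only unrelated polygons, so the same query would require geometric searching rather than a direct lookup.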

MultiGen is a visual simulation database toolset developed and sold by MultiGen, Inc. which operates on Silicon Graphics workstations, and which is used throughout DIS to support synthetic environment database development, as well as for 3-D model support for use on other IG systems. OpenFlight is MultiGen's binary database format, which defines geometry, attributes, and relationships among the elements which make up the visual database. An OpenFlight database consists of a hierarchical structure of nodes, called beads, which support polygonal representations of the terrain surface, terrain features, and models of objects such as vehicles, trees, and buildings. OpenFlight format contains a very limited amount of metadata, which includes the database's revision number, date and time of last update, the database extent, and the origin of the local Cartesian coordinate system. OpenFlight does not support topology.
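The hierarchical bead structure described above can be sketched as a simple tree. The node kinds and names used here are illustrative stand-ins, not OpenFlight's actual record types:

```python
class Bead:
    """One node ('bead') in a hierarchical visual database tree;
    the kinds and names are illustrative, not OpenFlight records."""
    def __init__(self, kind, name, children=None):
        self.kind = kind          # e.g. "group", "object", "polygon"
        self.name = name
        self.children = children or []

def count_polygons(bead):
    """Depth-first walk of the hierarchy, as a database tool might do
    when budgeting polygons for a terrain tile."""
    total = 1 if bead.kind == "polygon" else 0
    for child in bead.children:
        total += count_polygons(child)
    return total

# A tiny tile: two objects, three polygons in all.
tile = Bead("group", "terrain_tile", [
    Bead("object", "house", [Bead("polygon", "wall"), Bead("polygon", "roof")]),
    Bead("object", "tree", [Bead("polygon", "canopy")]),
])
```

The hierarchy makes whole-subtree operations (culling, LOD selection, model replacement) cheap, but note that nothing in such a tree records which polygons are adjacent on the ground, which is why OpenFlight's lack of topology matters for interoperability.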

The S1000 Database Application Programmer's Interface (API) is a library interface for accessing S1000 visual database elements and is used in conjunction with a database produced using the S1000 Virtual World Design Database Toolset. The S1000 API is currently supported for HP, SUN, and SGI workstations running UNIX operating systems. It was originally developed to support the Army's SIMulation NETwork (SIMNET), but is still being used to develop synthetic environment databases for DIS simulations. S1000 API metadata is limited to the geographic extent information contained in each visual database. Topology is supported as both networks and pole-sets. S1000 Networks represent features such as roads and rivers that are made up of segments (edges) and nodes. Each segment explicitly references its associated start and end nodes. S1000 Pole-Sets represent independent simple features such as treelines and canopies, in which individual coordinate points are accessible only in a sequential manner. Similarly, the points making up defragmented terrain area features are accessible only sequentially. Networks and Pole-Sets contain only simple line and area features and are represented in 2-D world coordinates.

The Standard Simulator Data Base (SSDB) Interchange Format (SIF) was created to facilitate the sharing of flight simulator databases using the Simulator Data Base Facility (SDBF). The SIF format has been used to distribute databases to the DIS community to support I/ITSEC interoperability demonstrations. SIF provides extensive metadata, including identification, security, lineage, accuracy, and data directory information at multiple levels. The SIF database header files for each model, culture, and gridded data type provide identification, currency, and security for an entire SIF database, as well as for individual models, culture tiles, terrain tiles, and texture images. SIF also contains directory information which allows individual component files to be located. SIF also provides topological support for model and culture data. SIF polygonal models are represented in a data structure in which each polygon explicitly references a set of three or more vertices, and vertices can be shared by multiple adjacent polygons. However, the polygons are not accessible from the associated vertices, edges are not guaranteed to meet only at vertices, and there is no guarantee that vertices are not duplicated. SIF represents cultural features using a planar graph (Level 2) data structure, which is similar to DMA's Standard Linear Format (SLF), with the additional constraint that segments must be split whenever they are intersected by another segment, and wherever a point feature is located on a segment. Segments (edges) and coordinates that are shared by multiple features are stored only once. Nodes are not explicitly represented. Topology is supported only within, but not across, culture tile boundaries.
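SIF's shared-vertex polygon representation can be sketched as follows. The coordinates are invented, but the structure mirrors the description above: polygons index into a common vertex pool, while the reverse mapping from vertex to polygon is not stored:

```python
# A common vertex pool; each polygon stores indices into it, so adjacent
# polygons share vertices. Coordinates are invented for illustration.
vertices = [(0.0, 0.0, 10.0), (1.0, 0.0, 11.0),
            (1.0, 1.0, 12.0), (0.0, 1.0, 10.5)]
polygons = [(0, 1, 2), (0, 2, 3)]  # two triangles sharing the edge 0-2

def polygon_coords(polygon):
    """Resolve a polygon's vertex indices to coordinates."""
    return [vertices[i] for i in polygon]

def polygons_using_vertex(vertex_index):
    """The reverse mapping is not stored, so (as in SIF) finding the
    polygons that use a vertex requires scanning every polygon."""
    return [p for p in polygons if vertex_index in p]
```

Sharing the pool keeps shared edges consistent between neighbors, but because the vertex-to-polygon direction is absent, adjacency queries remain linear-time scans rather than direct lookups.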

2.1.3 Development of the Visual Database

Creation of a visual database using any of these formats is accomplished by using various processes which are supported by algorithms found in the database generation system software. The process of creating the elements of the visual database adds to the lack of commonality in the final visual scene by introducing human variability. From vendor to vendor, however, the database generation process follows similar steps. The common steps, and the associated issues with the processes a database developer uses to build the visual database, are as follows: [8]

  1. Import of gridded elevation data. It is important to note here the number of different sources of terrain data which are available as source elevation data for the database. Currently, DMA's Digital Terrain Elevation Data (DTED) is the primary source of terrain elevation data that provides landform, slope, elevation, and gross terrain roughness in a digital format and is specified relative to some horizontal and vertical datum. DTED Level 1 data has a grid spacing of 100 meters and DTED Level 2 data, in limited availability, is found in 33 meter resolution. Other sources of terrain data are the United States Geological Survey (USGS) Digital Elevation Models (DEMs), SIF gridded data, and certain elevation data sets developed from other National-level assets.
  2. Polygonization of gridded elevation data to form a continuous 3D polygonal terrain surface. The basis for many of these polygonization algorithms is the Delaunay triangulation, a 2D triangulation that can be used to create 3D surfaces by simply adding the elevation (z) component to each 2D vertex upon completion of the 2D triangulation. One vendor indicated using this algorithm in a constrained mode which accounted for terrain roughness (i.e., pre-defined peaks and pits in the final triangulation) and incorporation of 3D polygonal road and river features (i.e., cut/fill operations to ensure that roads are level or appropriately banked and that rivers flow downhill). Although there is one major algorithm used by the database generation systems for terrain polygonization, variability still occurs due to vendor modifications and implementations of the algorithm tailored to their own databases.
  3. Import of 2D vector culture data. DMA's Digital Feature Analysis Data (DFAD) is the standard vector representation for terrain feature data. It is found in four different formats: Level 1, Level 2, Level 1-C, and Level 3-C.
  4. Polygonization of 2D vector lines and areal culture data to form a set of 3D culture features overlying, or fractured over, the 3D polygonal terrain surface created in step 2. There are numerous algorithms for manipulating the terrain and cultural features; examples of terrain and culture importers can be found in the software descriptions in a paper by PAR Government Systems.[9]
  5. Instantiation of 3D polygonal models on top of the 3D polygonal terrain surface as replacements for 2D vector culture point features. This is normally a very simple process, involving only the location (x, y, z) and rotation (heading) of the model. Manual intervention is needed mainly to correct models that end up on too steep a slope and interpenetrate the terrain (Z-buffer systems cannot render interpenetrating polygons properly) or that have edges lying on terrain polygon edges, presumably because of an IG polygon rendering priority algorithm.
  6. Various manual terrain database modeler operations to fix problems incurred in steps 1 - 5 and to enhance the database. Database format design is directly tied to the database generation software used to develop the database. After the terrain and cultural features have been merged, the database modeler has to manually fix discrepancies between the terrain database and the real world. This human-in-the-loop situation leads to the next issue with the database software. The database generation software provides only a general set of tools with which the developer constructs the various synthetic environment elements that increase realism. These general-purpose mechanisms, while allowing creativity in database development, allow too much flexibility in the design of the database; they give the developer only the most general methodology and rules for database creation. To assure cognitive visual recognition with other vendor-created databases of the same synthetic environment, the definitions of elevation, culture, and object features would need to be so highly detailed that the requirements documentation in a visual scene statement-of-work would be endless. One person's interpretation of what a piece of the environment (a tree, a road, a tank ditch) looks like may be totally different from another's interpretation of the same area. Detailed database descriptions are important, but the fundamental problem falls back onto the software: its flexibility invites the database modeler to apply his or her own interpretation of the visual scene in order to make the database look more realistic. This individual interpretation is what precludes database interoperability.
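The polygonization described in step 2 can be sketched in a few lines: triangulate the elevation posts in 2D, then lift each vertex to 3D by re-attaching its elevation. This is an illustrative sketch using scipy, not any vendor's actual algorithm; the constrained behavior described above (pre-seeded peaks and pits, road cut/fill) is not shown, and the grid dimensions and elevations are invented.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical 4 x 4 grid of elevation posts at 100 m spacing (DTED Level 1-like),
# with a small jitter so the triangulation of the regular grid is unambiguous.
rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.arange(4) * 100.0, np.arange(4) * 100.0)
xs = xs + rng.normal(0.0, 0.5, xs.shape)
ys = ys + rng.normal(0.0, 0.5, ys.shape)
elev = rng.uniform(200.0, 260.0, xs.shape)        # post elevations in meters

# Triangulate in 2D only; elevation plays no part in the triangulation itself.
points_2d = np.column_stack([xs.ravel(), ys.ravel()])
tri = Delaunay(points_2d)

# Lift to 3D by re-attaching the z component to each 2D vertex.
points_3d = np.column_stack([points_2d, elev.ravel()])
triangles = points_3d[tri.simplices]              # shape: (n_tris, 3 verts, xyz)
print(len(triangles), "terrain polygons")
```

Because the 2D triangulation is computed before elevation is considered, two systems that constrain or post-process the triangulation differently will produce different terrain skins from identical posts, which is one root of the commonality problem described above.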
The database interoperability problem lies in understanding the basics of vendor-specific database generation processes for development and implementation. The large amount of variability in visual database development accounts for many of the problems associated with distributed simulation exercises using databases in different formats. The answers to "how has the source data been implemented, and at what resolution?" need to be easily readable and accessible throughout the database generation software. This is essential for cross-platform database development and interoperability. If the software were structured to follow a common development path with specific steps, visually similar databases might be feasible. The question here is one of incentive: why should a vendor care whether its database looks visually similar to a database developed on another vendor's software, and what incentive is there for vendors to spend research and development money changing their database generation software so that their databases become more like someone else's? The importance of interoperability must be stressed to the database generation system vendors. Partial fixes and implementations offer immediate relief, but a long-term commitment to the interoperability issue is needed.

2.2 BACKGROUND: IG RUNTIME RENDERING OF THE VISUAL SCENE

Our description of database rendering up to this point has been an offline, or static, portrayal of how the database generation system models the visual scene. Once the database has been developed, the IG must take this synthetic view of the environment and provide a mechanism for fluid, seamless movement throughout the visual scene. The IG runtime rendering schemes take the interim database information and construct the terrain polygonization, conform the cultural features to the terrain skin, and provide a decision-making mechanism for polygonal rendering, including Level of Detail (LOD) management, texturing, and range buffering.
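One of these runtime decisions, LOD management, is typically driven by viewer-to-model range. A minimal sketch follows; the switch ranges, LOD names, and polygon counts are invented for illustration, and real IGs add vendor-specific refinements such as hysteresis and morphing between levels.

```python
# Hypothetical switch ranges for one vehicle model: each entry is
# (maximum range in meters at which this LOD is drawn, LOD name, polygon count).
LOD_TABLE = [
    (500.0,  "high",   1200),   # full detail inside 500 m
    (2000.0, "medium",  300),
    (8000.0, "low",      60),
]

def select_lod(range_m):
    """Return the (name, polygon_count) drawn at the given viewer range."""
    for max_range, name, polys in LOD_TABLE:
        if range_m <= max_range:
            return name, polys
    return None, 0  # beyond the last switch range: the model is not drawn at all

print(select_lod(300.0))    # ("high", 1200)
print(select_lod(5000.0))   # ("low", 60)
print(select_lod(10000.0))  # (None, 0)
```

Two IGs with different switch-range tables will render the same tank at different detail levels at the same range, which is precisely the source of asymmetric viewing discussed later in this paper.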

To understand the concept of IG runtime rendering, consider the following example. Two different IG platforms simulating M1A1 tanks are immersed in the same synthetic database environment, developed using the same raw data sources and formats. In this scenario, Tank A and Tank B are placed at the same geographic location on Hill #14, each facing and overlooking the Unadilla Valley, with Highway 58 and the Unadilla River meandering below and the distant rolling hills of the Chenango Range beyond; visibility is 25 km. This static visual scene, with no moving models present, is visually the same for both tanks. Now, from right to left, heading north on Highway 58, comes a column of five T-80 tanks. Both IGs display the column with no discernible differences between Tank A's and Tank B's visual scenes. Then, from the south, a HIND attack helicopter enters the valley and takes a position 300 meters in front of the moving T-80 column. The visual scenes in Tank A and Tank B each change drastically. In Tank A, the T-80 column and the HIND are clearly visible traversing the database, but the rolling hills in the background have become covered in haze, and the visibility has dropped to 10 km. In Tank B, the vehicles and the helicopter are also clearly visible, and the horizon and rolling hills remain visible, but the trees, shrubs, and tall grass in the foreground have lost their texture and look like blotches of green and brown instead of the discernible pine/maple trees and scrub brush features they once were.

This example shows the effects of IG runtime rendering. In both cases, the significant parts of the visual simulation (i.e., the tanks and the helicopter) were preserved at full resolution. Each IG, however, found that the addition of the HIND to the visual scene overloaded the system. To compensate, each IG performed scene management on its visual scene. One IG decreased the visibility with haze so that the distant terrain and features in the background did not have to be rendered, freeing performance for the moving models. The second IG also used scene management, but eliminated detail from foreground terrain and features because its scheme judged the background to be more important to the simulation. Runtime rendering schemes allow the IG to compensate for undesirable system overloads that would otherwise degrade the visualization.
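The two compensation strategies in the example can be caricatured as different overload policies applied to the same polygon budget. This is purely illustrative; actual IG load management is proprietary and far more sophisticated, and every name, range, and polygon count below is invented.

```python
# A frame's renderable items; polygon counts and ranges are invented.
scene = [
    {"name": "T-80 column",    "kind": "model",   "range_m": 1500,  "polys": 3000},
    {"name": "HIND",           "kind": "model",   "range_m": 1200,  "polys": 1500},
    {"name": "foreground veg", "kind": "feature", "range_m": 400,   "polys": 2500},
    {"name": "valley terrain", "kind": "terrain", "range_m": 5000,  "polys": 2000},
    {"name": "distant hills",  "kind": "terrain", "range_m": 20000, "polys": 2000},
]
BUDGET = 9000  # polygons this hypothetical IG can draw per frame

def policy_haze(scene, budget):
    """Policy A: always keep moving models, then add static items nearest-first;
    whatever does not fit is hidden behind haze. Returns the names drawn."""
    models = [i for i in scene if i["kind"] == "model"]
    statics = sorted((i for i in scene if i["kind"] != "model"),
                     key=lambda i: i["range_m"])
    kept = list(models)
    total = sum(i["polys"] for i in models)
    for item in statics:
        if total + item["polys"] <= budget:
            kept.append(item)
            total += item["polys"]
    return {i["name"] for i in kept}

def policy_decimate(scene, budget):
    """Policy B: keep every item, but swap near culture features to a blob-like
    low-detail form until under budget. Returns the resulting polygon total."""
    LOW_DETAIL = 200
    total = sum(i["polys"] for i in scene)
    for item in sorted(scene, key=lambda i: i["range_m"]):
        if total <= budget:
            break
        if item["kind"] == "feature":
            total -= item["polys"] - LOW_DETAIL
    return total

print(policy_haze(scene, BUDGET))      # drops "distant hills" (the haze effect)
print(policy_decimate(scene, BUDGET))  # keeps everything, at reduced detail
```

Both policies satisfy the same budget, yet produce the divergent Tank A and Tank B scenes described above, from identical inputs.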

The individual IG system is composed of graphics hardware that manipulates the data bits in order to display the database on the visual display system. IG vendors live and die by the hardware design of their IG platform and how that hardware renders the visual scene. Descriptions and details of vendor hardware are often proprietary, but if vendors disclosed what is rendered and when, not necessarily how, other vendors might begin to think about interoperability and fair fight between IGs in the same terms. Without this common knowledge of what and when, similar terrain and feature data from the database will be rendered at different times on different IG platforms, affecting the realism of the simulation. This large amount of variability in IG runtime processes, coupled with the inconsistencies of database generation, leads to major differences in the final visual scene.

2.3 INTEROPERABILITY OF THE VISUAL DATABASE WITH OTHER SIMULATION DATABASES

It is important to remember that most visual simulation formats were developed for system-dependent applications. Visual databases were initially developed for independent operation and were not intended to support distributed exercises. Therefore, significant cross-platform characteristics such as metadata and topology have very limited support in the database generation software. Incompatibilities with other types of databases (e.g., Semi-Automated Forces (SAF) databases), which rely heavily on metadata and topology, become more evident in distributed exercises. The list of database generation processes discussed in Section 2.1.3 should include the following as a last step: once the database has been created, tweaked, and considered acceptable, the final step in database development should be the creation of metadata and topology from the finished database. Some database generation system vendors support metadata and topology, but in very limited and differing ways. Metadata and topology are the key characteristics for interoperability, not only across IG platforms but also with the non-visual simulators that are part of the virtual environment.

MultiGen's OpenFlight does not support topology; LADS' S1000 API supports level 0 (Pole-Sets) and level 1 (Networks) topology; and SIF supports level 2 (Planar Graph) topology for features. These differing levels of topological support cause major problems when attempting interoperability between platforms. For example, one database may represent a narrow body of water as a fordable stream, while another database represents the same body of water as a narrow lake. The topological characterization in the first database allows vehicles to navigate across the feature, but in the second, vehicles would drown if they attempted the crossing. The problem is even more evident when dealing with a non-visual database, such as one used by Semi-Automated Forces (SAF). None of the current database formats support anything beyond level 2 topology, yet SAF needs a database that supports the terrain reasoning found in level 3. Therefore, in a similar simulated battlefield environment, there is no way of knowing exactly how SAF vehicles and simulator vehicles would interact or be affected by the different levels of topology.
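The fordable-stream example reduces to a few lines: the same geometry, attributed differently in two databases, yields opposite mobility answers. The attribute names, depth values, and fording threshold below are invented for illustration; they stand in for whatever terrain reasoning a SAF actually performs.

```python
# The same narrow body of water, as attributed in two hypothetical databases.
db_a_feature = {"geometry": "water_linear", "class": "stream", "depth_m": 0.4}
db_b_feature = {"geometry": "water_linear", "class": "lake",   "depth_m": 2.5}

def can_ford(feature, vehicle_fording_depth_m=1.2):
    """Sketch of the terrain reasoning a SAF might perform before a crossing."""
    return feature["depth_m"] <= vehicle_fording_depth_m

print(can_ford(db_a_feature))  # True: vehicles drive across
print(can_ford(db_b_feature))  # False: vehicles drown attempting the crossing
```

A SAF reasoning over one database and a crewed simulator viewing the other would thus make mutually inconsistent tactical decisions about the identical piece of terrain.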

The purpose of a visual database format is to give an IG hardware platform a means of recognizing and interpreting the data types that form the database. In a perfect world, database interoperability across heterogeneous platforms would be simple: raw data formats and resolutions, and the steps required to build the initial database, would be the same. This would allow terrain elevation measurements (e.g., the crest of a hilltop) and feature locations and descriptions (e.g., a 55' pine tree) to be perfectly uniform across databases. As discussed previously, the raw data available today, and the variations in its resolution, leave the simulated world far from perfect. Elevation postings and feature data are currently rendered differently by each database generation system. These differences again feed the interoperability problem.

The current visual database formats do, however, provide a very robust mechanism for developing a tactical database environment catered to a specific IG platform. The problem arises when the user tries to take this robust, system (IG) dependent database and share it with another, heterogeneous IG platform. The first step in this sharing process is to translate the initial database into a format that can be used by the other IG platform. This transfer degrades the initial database, because the interchange format must break the initial database down into features that can be translated into the next database format. The database developer on the second IG platform then takes the transferred database and rebuilds and tweaks the particular features that were lost in the transfer. Database development is not standardized; it currently centers on the individual developer. Unless the same developer reconstructs the database, the synthetic interpretation of the real world will differ. Therefore, even though the two databases are of the same terrain, individual features will be visualized differently, defeating the goal of symmetric, cognitively consistent visualization.
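The degradation described above can be modeled as a projection: any attribute the interchange format cannot express is silently dropped, and the receiving developer must recreate it by hand, inevitably differently. The field names and format capabilities below are invented for illustration; no real interchange format is being described.

```python
# A feature as a source database might carry it (all fields hypothetical).
source_feature = {
    "geometry": "polygon",
    "class": "pine_tree",
    "height_ft": 55,
    "texture": "pine_bark_03",       # vendor-specific attribute
    "switch_ranges": [500, 2000],    # vendor-specific LOD data
}

# The subset of attributes the (hypothetical) interchange format can carry.
INTERCHANGE_FIELDS = {"geometry", "class", "height_ft"}

transferred = {k: v for k, v in source_feature.items() if k in INTERCHANGE_FIELDS}
lost = set(source_feature) - set(transferred)
print(lost)  # these must be rebuilt by hand on the receiving platform
```

Everything in `lost` becomes a matter of the second developer's interpretation, which is why the round-tripped database never matches the original visually.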

SIF's purpose was to be interoperable for networked simulation. Several evaluations of the SIF format have been published, identifying significant limitations of the format relative to DIS community requirements, including: the use of compressed ASCII encoding for most SIF files, the lack of spatial organization within the culture and terrain tiles, the unpredictable ordering of features within culture tiles, the lack of support for full (Level 3) topology, the lack of explicit support for polygonal representations of terrain, and too much flexibility, which puts too great a burden on database consumers. Nevertheless, the SIF format has been used to distribute databases to the DIS community in support of the annual I/ITSEC interoperability demonstrations for the past several years, and it is the current de facto standard for DIS synthetic environment database interchange.[10] This illustrates the complexity of the networked simulation environment problem relative to the formats of the databases used to describe it: SIF was specifically developed to support networked simulation, and significant issues still remain unresolved.

The Synthetic Environment Data Representation and Interchange Specification (SEDRIS), a joint effort of ARPA and STRICOM, is attempting to address the simulated environment exchange problem. The specification is a neutral interchange mechanism incorporating the various database issues of data representation, feature data dictionaries, and access languages (APIs) into a single, coherent, integrated representation of the environment.[9] Its importance is that it allows all data formats to become accessible, interchangeable, and upgradable. The current issue with standards and formats is that as soon as they become available, they are already outdated and ineffective for the simulation community. SEDRIS allows the use of different data formats as long as they conform to its data model.

SECTION 3

SUMMARY OF THE NECESSARY AND SUFFICIENT CONDITIONS TO ENSURE A FAIR FIGHT AND THE IMPLICATIONS FOR THE USE OF HETEROGENEOUS IGS IN DISTRIBUTED EXERCISES

The discussion in the preceding sections can be summarized very simply. The sufficient conditions to ensure a fair fight in a distributed exercise are:

  1. That all IGs in the exercise use the same visual database.
  2. That all IGs in the exercise render the visual scenes using the same runtime schemes.
  3. That all visual scenes are rendered at the same LOD.
Given the current state-of-the-art in computer image generation, the necessary conditions are:
  1. Use of homogeneous IGs with identical visual databases by all simulators in the exercise.
  2. Specification of a single LOD at which the exercise will be conducted.
The two necessary conditions listed above are unacceptable because they overly constrain the development of simulators and the conduct of distributed exercises. The impact of imposing these conditions, and some possible alternatives, are discussed in the remainder of this section.

3.1 THE IMPACT OF REQUIRING HOMOGENEOUS IGS AND VISUAL DATABASES

The requirement to conduct distributed simulation exercises using homogeneous IGs is unacceptable for at least three reasons:

Section 2 described the close coupling between visual databases and IGs. Ensuring sufficient commonality of the visual scenes for a fair fight requires that the same visual database be used by the common IG. As discussed, given the current state of the art, this requires not only that the same source data and database format be used, but also that the final tweaking of the visual database be identical. The effect would be to centralize all database development at a single vendor.

3.2 THE IMPACT ON DISTRIBUTED EXERCISES OF SPECIFYING A SINGLE LOD

Whenever two simulated entities view each other at different LODs, the potential for asymmetric viewing, and an unfair fight, exists. The necessary condition listed above, specifying a single LOD for a distributed exercise, is usually too great a constraint to be acceptable: in almost every case, the range of sensors and weapon systems exceeds the range at which IGs are capable of providing a high LOD. Currently, there are two alternatives for dealing with the LOD problem. The first is to specify a LOD low enough that the IGs in the exercise can render it at acceptable ranges.[11] The second is to accept that the conditions for unfair fights will exist and to mitigate their impact. These two alternatives are discussed below.

Specifying a single LOD may be a viable alternative for exercises that primarily involve like systems, for example, mechanized ground vehicles. Although realism may be lost by requiring a lower LOD, at least the impact of the reduced realism should be roughly the same for all participants. This is not the case for exercises involving dissimilar systems. Consider what happens if you mix dismounted infantry into a scenario with mechanized vehicles. The impact of specifying a reduced LOD will be much greater on the infantry, which depends on relatively small terrain and cultural features for cover and concealment, than on the much larger mechanized vehicles. The problem is choosing a LOD that allows the infantry to find adequate cover and still supports infantry-armor/armor-infantry engagements at the maximum range of their weapon systems. (Recall that infantry can engage armor at significant ranges using both direct fire from the TOW missile and indirect artillery fire.)
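The tension can be made concrete with a back-of-the-envelope check: pick a candidate exercise-wide LOD, and ask whether the IGs can hold it out to the longest weapon range present while still rendering the smallest feature any participant depends on. Every number below is invented for illustration except the TOW-class range of roughly 3,750 m, which is approximate and drawn from open sources.

```python
# Hypothetical IG capacity: range (m) out to which each LOD can be sustained.
SUSTAINABLE_RANGE = {"high": 1500, "medium": 4000, "low": 12000}

# Smallest terrain/culture feature (m) still rendered at each LOD (invented).
SMALLEST_FEATURE = {"high": 0.5, "medium": 3.0, "low": 15.0}

def viable_lods(needed_range_m, needed_feature_m):
    """LODs that cover the longest engagement range while still rendering the
    smallest feature a participant needs for cover and concealment."""
    return [lod for lod, reach in SUSTAINABLE_RANGE.items()
            if reach >= needed_range_m and SMALLEST_FEATURE[lod] <= needed_feature_m]

# Armor-only exercise: engagements out to ~3,750 m (approximate TOW-class range),
# and ~5 m cover features (berms, treelines) suffice.
print(viable_lods(3750, 5.0))   # ["medium"] under these invented numbers

# Mix in dismounted infantry, which depends on ~1 m features for concealment:
print(viable_lods(3750, 1.0))   # [] -- no single LOD satisfies every participant
```

Under these (invented) numbers a single LOD exists for the armor-only case but not for the mixed case, which is exactly the dilemma described in the paragraph above.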

A better alternative may be to accept the fact that asymmetric visualization is going to occur and to attempt to reduce its impact. This is what is currently done in distributed exercises, either intentionally or by default. The potential for asymmetric visualization will vary greatly from scenario to scenario. In relatively open, featureless terrain, the number of cases in which small differences in the visual scene affect an engagement may be few. It may be possible, through careful exercise monitoring and control, to reduce the negative impacts of an unfair engagement by, for example, re-instantiating an entity that was destroyed in an unfair fight. At the very least, careful after action review (AAR) should be conducted to reduce the negative impact on training or data collection caused by an unfair fight. This task becomes more difficult as terrain complexity and the number of moving models in a scenario increase and unfair fights become more likely.

Of the two alternatives discussed, the second, accepting asymmetric visualization and mitigating its effect, is more appealing. The loss of realism incurred by limiting all players to a single LOD would appear to have a much greater negative impact on training and analysis than occasional unfair fights.

3.3 ALTERNATIVE APPROACHES TO SATISFYING THE NECESSARY CONDITIONS FOR A FAIR FIGHT

What can be done to ensure a fair fight in a distributed exercise without incurring the unacceptable consequences just described? Unfortunately, in the near term, very little. Even large simulation systems such as CCTT, which can conduct large distributed exercises with (nearly) homogeneous IGs,[12] will still have fair fight problems whenever simulated entities view each other at different LODs. In the long term, the goal of achieving a common visual scene across dissimilar IGs in a distributed exercise will only be achieved through a concerted effort by the entire modeling and simulation community. Some examples of the types of efforts required are discussed below.

The modeling and simulation community has little control over the development of source terrain and feature data. Priorities for the development of accurate and detailed source data will often be set by other groups of users, and visual database developers will have to take what source data is available. Therefore, the focus must be on the developers' visual database formats.

The DIS Terrain Data Format Study dated 12 May 1995 prepared for the U.S. Army Topographic Engineering Center by PAR Government Systems Corporation lists eleven key characteristics of terrain data formats. The purpose of this study was to identify the degree to which current and emerging database formats support these eleven characteristics. These eleven characteristics can provide the basis for a set of standards for future database format development. Briefly, the key characteristics are:

The finding of the terrain format study is that none of the current or emerging database formats provide strong support for all of the characteristics listed. The study also provides indications of where and how formats can be improved to better support these characteristics. The development of a set of standards for visual database formats based on these characteristics can ensure future commonality between visual databases for dissimilar IGs.

Database standardization is only part of the solution. As described in Section 2, differences in the runtime rendering performed by different IGs can result in different visual scenes being produced from the same visual databases. Standardization of runtime rendering schemes is a necessary part of providing common visual scenes and a fair fight across a distributed network of heterogeneous IGs. Standardization may be more difficult to achieve in this area because of the proprietary nature of IG development. IG vendors are in competition to provide the most powerful IGs at the lowest cost. Since new and faster processors are equally available to all vendors, improvements in IG performance often come through increased performance of the rendering algorithms. Therefore, it is not in a vendor's interest to share its processes with other vendors. What will be required is an insistence by users on adherence to a set of standards for rendering algorithms. Users, in this case, refers to more than just the military modeling and simulation community. Military simulation is only a relatively small part of the market for IG vendors. There are many other users of emerging visualization technology both within and outside the government. The modeling and simulation community must receive support from other users for standardization if sufficient pressure is to be brought to bear on the IG industry. It is not clear how much support will be forthcoming from a user community that does not share the problems associated with distributed exercises discussed in this paper.

SECTION 4

CONCLUSIONS AND RECOMMENDATIONS

The key findings of this study are summarized below.

We conclude that unfair fights will be a fact of life in distributed exercises for some time. The impact of this situation will vary from exercise to exercise depending on a number of factors, including: whether homogeneous or heterogeneous IGs are used, the complexity of the visual scenes (terrain, features, and number of models), engagement ranges, and the number of dissimilar systems being simulated (dismounted infantry, vehicles, helicopters, or aircraft). For a small number of exercises, unfair fights may be avoided by using homogeneous IGs at a single LOD. For the majority of exercises, the focus must be on mitigating the negative impact of those unfair fights that do occur. The primary means for reducing the effects of unfair fights are exercise control (e.g., re-instantiating unfairly killed systems) and after action reviews.

Long-term efforts to require adherence to standards for database and IG development promise to reduce the incidence of unfair fights, although the potential for unfair fights will exist as long as IGs operate at multiple LODs.
