ARMY AVIATION SITUATIONAL AWARENESS THROUGH INTELLIGENT AGENT-BASED
DISCOVERY, PROPAGATION, AND FUSION OF INFORMATION
Stephen Jameson and Craig Stoneking
Lockheed Martin Advanced Technology Laboratories
Camden, New Jersey
[sjameson, cstoneki]@atl.lmco.com
ABSTRACT
The Army Aviation community is promoting the development of technologies and systems that support effective on-the-move command of airborne and ground-based maneuver forces through shared situation awareness and decision aiding technologies. The operational concepts for these technologies and systems are characterized by the extensive use of mobile sensing systems, unmanned platforms, and decision aiding systems in the forward elements of the combat force, with the goal of providing mobile commanders with the improved situational awareness that results from a shared common operational picture of the battlefield. In support of these objectives, Lockheed Martin Advanced Technology Laboratories (ATL), under contract to the Army, is combining three ATL-developed technologies (Multi-Sensor Data Fusion, Intelligent Information Agents, and the Grapevine information sharing architecture) in an innovative way to provide the Shared Situation Awareness capability for ongoing Army programs in this area. In this paper, we describe the three technologies and their application in current and future work, and present results of simulation testing showing the benefits of Grapevine-enabled Distributed Data Fusion.
INTRODUCTION
The Army Aviation community is promoting the development of technologies and systems that support effective on-the-move command of airborne and ground-based maneuver
forces through shared situation awareness and decision aiding technologies. The operational concepts for these technologies and systems are characterized by the extensive use
of mobile sensing systems, unmanned platforms, and decision aiding systems in the forward elements of the combat
force. This work is exemplified by the Airborne Manned/
Unmanned Systems Technology Demonstration (AMUST-D) and the Hunter Standoff Killer Team (HSKT) Advanced
Concept Technology Demonstration (ACTD) programs, led
by the U.S. Army Aviation Applied Technology Directorate
(AATD). AMUST-D and HSKT are aimed at providing airborne warfighters and mobile commanders with the improved situational awareness that results from the cooperative construction of a shared common operational picture of
the battlefield, through the sharing and fusion of information
from all exploitable sensors and intelligence data sources
that bear on the battlespace.
In support of these objectives, Lockheed Martin Advanced Technology Laboratories (ATL), under contract to AATD, is combining three ATL-developed technologies (Multi-Sensor Data Fusion, Intelligent Information Agents, and the Grapevine information sharing architecture) in an innovative way to provide the shared situation awareness that is key to the success of the AMUST-D and HSKT programs (Figure 1).

(Presented at the meeting of the American Helicopter Society Forum 58, Montreal, Canada, June 11-13, 2002. Copyright © 2002 by the American Helicopter Society, Inc. All rights reserved.)
This paper describes the three technologies and their application in AMUST-D and HSKT. We also describe work that
has been done to gather data on the effectiveness and benefits of these technologies in promoting situation awareness
among a distributed team of platforms. We present the results of this work and draw conclusions about the potential operational benefits of these technologies.
Figure 1. AMUST/HSKT Shared Situation Awareness
architecture.
APPLIED TECHNOLOGIES
The Artificial Intelligence laboratory at Lockheed Martin
Advanced Technology Laboratories (ATL) has been developing Information Fusion and Situation Awareness capabilities for more than 10 years, including work on the Army's successful
Rotorcraft Pilot’s Associate (RPA) program. Our recent developments in Intelligent Information Agents have given us
an additional powerful technology for information dissemination, retrieval, and monitoring on the digital battlefield.
The Grapevine Information Dissemination architecture, a
specialized application of intelligent agents, was developed
to support opportunistic exchange of relevant data between
distributed forces over bandwidth-limited data links. The
individual technologies are described in the following section.
Multi-Sensor Data Fusion
From 1993 to 1999, ATL participated in the Army’s Rotorcraft Pilot's Associate (RPA) Advanced Technology Demonstration program, sponsored by AATD. ATL developed the
multi-sensor Data Fusion system [1] that provides a common
fused track picture to the RPA pilot and the RPA decision
aiding systems. In the RPA Data Fusion system, data representing as many as 200 battlefield entities, from 14 different
types of onboard and offboard sensors, is correlated and
fused in real time into a consolidated picture of the battlespace. The RPA system, including ATL's Data Fusion system, was successfully flight demonstrated on an AH-64D in
August 1999. The Data Fusion system (Figure 2) consists of
four main elements.
Figure 2. ATL's real-time multi-sensor data fusion system (source-specific input modules such as JSTARS, Grapevine, and JCDB; a core fusion process comprising Fusion Dispatch, Fusion Control, the Track, MTI, Group, and Intel Fusion Kernels, and Track Management; and client-specific output modules such as CORBA and shared memory).
A set of Input and Output modules manages the real-time
interfaces with the sensor systems and the other components
of the RPA system. The input modules read input from the
sensor systems at a sensor-specific rate, using a sensor-specific input protocol, and pre-process the input using tailored sensor-specific routines into a common intermediate
input format and information content. The output modules
take the fused trackfile and output it to a client, in a client-specified format and using a client-specific protocol. The
modular, object-oriented Data Fusion software architecture,
including the use of a common intermediate data input format, permits a single core body of Data Fusion code to be
easily adapted to multiple input and output formats and requirements, facilitating portability. On AMUST-D, two different versions of the Data Fusion system will be deployed
on the Longbow Apache and A2C2S Blackhawk helicopters.
Both versions will contain the same common fusion core,
coupled with platform- and sensor-specific input and output
modules.
A Track Management module stores all track data and
maintains the relationships among a set of track databases,
one for each sensor providing input, and a Central Trackfile
that stores the fused picture. Additional databases also provide access to a variety of data about platform, sensor, and
weapon characteristics used by Data Fusion.
A set of Fusion Kernel modules performs the heart of the
correlation and fusion processing. As with the input and output modules, the modular nature of the fusion process makes
possible the encapsulation of algorithms tailored to a specific input data type into a kernel module. As input data sets
are received, the appropriate Kernel is applied to that data
set, with the resulting output passed to the Track Management module to update the fused trackfile.
A top-level control structure, including Fusion Dispatch and Fusion Control modules, controls the application of fusion algorithms to the input data and ensures that the system meets real-time and resource requirements. The Fusion Dispatch module evaluates incoming data, determines which algorithm set—embodied in a fusion Kernel module—should be applied to fuse the data, and dispatches the appropriate Kernel to process the data set. The Fusion Control module monitors the performance and resource usage of Data Fusion, and applies control measures such as prioritization or down-sampling of input data in cases where the Data Fusion process begins to exceed resource limitations (such as memory or CPU usage) or to fail to meet timing requirements. This separation of the top-level control from the fusion algorithms allows the Data Fusion system to be configured readily to meet different performance and resource requirements in different applications.
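To make this modular dispatch pattern concrete, the sketch below shows one way it could be expressed in Python. It is an illustration only: the names (SensorDataSet, FusionDispatch, register_kernel) are hypothetical and do not come from the RPA software.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SensorDataSet:
    source: str    # input data type, e.g. "MTI", "TRACK", "GROUP", "INTEL"
    reports: list  # reports already converted to the common input format

class FusionDispatch:
    """Routes each incoming data set to the kernel registered for its type."""

    def __init__(self):
        self._kernels: dict[str, Callable[[SensorDataSet], list]] = {}

    def register_kernel(self, source: str,
                        kernel: Callable[[SensorDataSet], list]) -> None:
        """Associate a fusion kernel with one input data type."""
        self._kernels[source] = kernel

    def dispatch(self, data: SensorDataSet, track_management) -> None:
        # Pick the algorithm set for this input type and apply it; the
        # kernel's output goes to Track Management to update the trackfile.
        updates = self._kernels[data.source](data)
        track_management.apply(updates)
```

Registering kernels by input type is what would let a new sensor type be added without touching the control structure, which is the portability property the text describes.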
The general functional flow followed by each kernel follows
a similar set of steps. Figure 3 illustrates the steps followed
for MTI (Moving Target Indicator) data. The Prediction
function performs time-alignment across the sets of data
being fused to ensure that valid comparisons can be made.
The Clustering algorithms break down the battlespace into
geographically distinct clusters to limit the computational
complexity of the following algorithms. Cost Functions operate on each cluster to compute a matrix of composite similarity values between each input sensor data item and the candidate fused tracks. The Assignment function uses the optimal JVC (Jonker-Volgenant-Castanon) algorithm to compute matches between sensor data and fused tracks. Once the matches are identified, Fusion algorithms are applied to update the state of the fused trackfile based on the associated sensor data.

Figure 3. Data fusion kernel functional flow (Prediction: sensor data received at time t+1 is time-aligned with the fused common picture at time t; Clustering: data are grouped into geographic clusters; Cost Functions: a similarity metric is applied within clusters; Assignment: associations are formed based on cost values; Fusion: algorithms produce the updated common picture at time t+1).
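As an illustration of the cost-and-assignment stage, the sketch below computes a Mahalanobis-style cost matrix and solves the assignment, assuming Gaussian position errors. SciPy's linear_sum_assignment (a modified Jonker-Volgenant method) stands in for the JVC solver named above, and the gate value is a notional chi-square threshold, not a figure from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cost_matrix(reports, tracks, covariances):
    """Composite similarity as squared Mahalanobis distance between each
    report position and each candidate fused track position."""
    C = np.zeros((len(reports), len(tracks)))
    for i, z in enumerate(reports):
        for j, (x, P) in enumerate(zip(tracks, covariances)):
            d = z - x
            C[i, j] = d @ np.linalg.inv(P) @ d
    return C

def associate(reports, tracks, covariances, gate=9.21):
    """Optimal one-to-one report/track matches; pairs whose cost exceeds
    the gate (99% chi-square point, 2 degrees of freedom) are discarded."""
    C = cost_matrix(reports, tracks, covariances)
    rows, cols = linear_sum_assignment(C)
    return [(i, j) for i, j in zip(rows, cols) if C[i, j] < gate]
```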
One of the key algorithmic advances of the RPA Data Fusion system was its ability to effectively combine the processing of kinematic (position and velocity) information with
the processing of Class (vehicle type), ID (specific vehicle
information), and IFF (friend/hostile) information. This
processing takes place during the comparison of sensor data
with fused tracks, by the Cost Functions, and during the updating of the fused trackfile with associated sensor data by
the Fusion algorithms. The Class Cost Function and Fusion
algorithms compare and combine Class and ID information
expressed in a class hierarchy (Figure 4). Each sensor report
or fused track has a class representation that specifies the
confidence of each node in the hierarchy. A set of Modified
Bayesian Evidence Combination algorithms developed by
ATL is used to compare, combine, and summarize this information. ATL’s work in this area [2] represented a major
advance in Data Fusion technology.

Figure 4. Example of class fusion in RPA data fusion (a TRACKED report from one sensor and an AIR DEFENSE report from another combine to the class TRACKED AIR DEFENSE, with the ID narrowed to one of ZSU-23, 2S-6, SA-13, or SA-15).
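The toy example below suggests how confidence over the leaves of such a hierarchy can be compared and combined. It uses a plain normalized Bayesian product, which is only a stand-in for ATL's Modified Bayesian Evidence Combination algorithms [2], whose details are not given here; the leaf set and confidence values are invented.

```python
import numpy as np

# Leaves of a toy class hierarchy; the RPA hierarchy is far richer.
LEAVES = ["ZSU-23", "2S-6", "SA-13", "SA-15", "T-72", "SA-8"]

def combine(a: dict, b: dict, eps: float = 1e-3) -> dict:
    """Normalized Bayesian product of two leaf-confidence assignments;
    leaves absent from a report receive a small floor probability."""
    p = np.array([a.get(c, eps) * b.get(c, eps) for c in LEAVES])
    p /= p.sum()
    return dict(zip(LEAVES, np.round(p, 3)))

# A TRACKED report spreads confidence over tracked leaves; an AIR DEFENSE
# report spreads confidence over air-defense leaves. Their combination
# concentrates on the tracked air-defense leaves, mirroring Figure 4.
tracked = {"ZSU-23": .2, "2S-6": .2, "SA-13": .2, "SA-15": .2, "T-72": .2}
air_def = {"ZSU-23": .2, "2S-6": .2, "SA-13": .2, "SA-15": .2, "SA-8": .2}
print(combine(tracked, air_def))  # mass on ZSU-23, 2S-6, SA-13, SA-15
```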
Intelligent Agents for Information Retrieval
and Dissemination
Since 1995, ATL has been developing technology for Intelligent Information Agents to support dissemination, retrieval, and monitoring of information on the digital battlefield [3]. An Intelligent Agent is a persistent software construct that is able to interact with its environment in order to
perform tasks on behalf of the user. Intelligence, in this context, implies that the agent is imbued with some degree of knowledge of its environment or subject domain that allows it to make decisions that affect its behavior in response to its changing environment or the problem at hand. Many applications make use of mobile agents, which are able to travel between nodes of a network to make use of resources that are not locally available.
ATL has developed the Extendable Mobile Agent Architecture (EMAA) [4] that provides an infrastructure for the deployment of light-weight intelligent mobile agents. An agent
is launched at a processing node with a set of instructions
contained in an itinerary, a control construct that permits
highly flexible control over agent behavior. Based on conditions or information it encounters, the agent may need to
migrate to another node to continue performing its task or
locate needed information. EMAA makes use of the portability inherent in the Java™ language to migrate the agent
from its current processor to the target platform and execute
it on that processor. The original applications of EMAA involved the use of mobile agents for search, retrieval, and
dissemination of intelligence information in battlefield networks. Later applications exploited persistent Sentinel
Agents for monitoring data in distributed information systems to alert a user or client when certain conditions or
events occurred.
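The sketch below renders the itinerary idea in Python for illustration only; EMAA is a Java framework, and the Task, Node, and Agent names here are hypothetical. The point it shows is the control construct: an ordered set of tasks, each bound to a node, whose results can steer what the agent does next.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    node: str                            # node where the task must execute
    action: Callable[["Node"], object]   # work done with that node's resources

@dataclass
class Node:
    name: str
    resources: dict

class Agent:
    """Executes its itinerary in order, migrating to each task's node."""

    def __init__(self, itinerary: list[Task]):
        self.itinerary = itinerary
        self.results: list[object] = []

    def run(self, network: dict[str, Node]):
        for task in self.itinerary:
            host = network[task.node]    # stand-in for Java serialization
            self.results.append(task.action(host))  # may inform later tasks
```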
In order to meet the challenge of supporting information pull
as well as push, we began in 1999 to investigate the use of
Intelligent Information Agents to provide information to
supplement the data fusion process [5]. We focused on the
problem of improving the Common Tactical Picture (CTP)
by identifying areas in the fused track picture that could or
should be improved through the application of additional
data from other non-reporting sources.
An example of such an application is illustrated in Figure 5.
In this system, the output of Sensor Data Fusion is collected
to form the basis of the CTP. A persistent Sentinel Agent
examines the CTP to determine areas where additional information is needed. This analysis is based on several criteria, including: (1) areas in which data accuracy or latency
does not meet requirements, possibly due to reliance on a
source with high latency or large positional error; and (2)
areas where no data are present, but where tactical requirements, expressed in the tactical plan, indicate a need for information.

Figure 5. Sentinel Agent used to augment the CTP (the agent matches areas of interest against the fused picture, flagging areas where no data are present or where accuracy or latency is insufficient).
When the Sentinel Agent identifies a need for additional
information, it dispatches an Investigation Agent to search
for the needed information in a remote data source, such as
the All-Source Analysis System (ASAS). The results of the
investigation are converted into a format usable by Data
Fusion, and passed as input to Data Fusion for incorporation
into the CTP. This approach has been investigated in an internal research and development program and integrated into
the ACT II Battle Commander’s Decision Aid at the US
Army Air Maneuver Battle Lab (AMBL) at Ft. Rucker, Alabama.
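A minimal sketch of the Sentinel Agent's two criteria follows, under assumed data structures: each track carries a positional error and a latency, plan areas expose a contains() test, and the thresholds are notional. All names are hypothetical, not taken from the fielded system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float
    y: float
    pos_error_m: float   # positional error of the fused estimate
    latency_s: float     # age of the newest contributing report

def information_needs(tracks, plan_areas,
                      max_error_m=500.0, max_latency_s=60.0):
    """Return areas of interest that would justify dispatching an
    Investigation Agent to a remote source such as ASAS."""
    needs = []
    for t in tracks:  # criterion 1: accuracy or latency below requirements
        if t.pos_error_m > max_error_m or t.latency_s > max_latency_s:
            needs.append(("refine", (t.x, t.y)))
    for area in plan_areas:  # criterion 2: plan demands data, none present
        if not any(area.contains(t.x, t.y) for t in tracks):
            needs.append(("search", area))
    return needs
```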
Grapevine Data Dissemination

The Grapevine architecture [6] was originally developed by ATL for use on DARPA's Small Unit Operations (SUO) program. It makes maximum use of bandwidth for information sharing by providing each node with a description of the information needs of its peers, so that each node can selectively transmit only that information it understands to be of real value to its neighbors. By sharing relevant sensor data, each participant can build a common tactical picture that is consistent between participants and is as complete as the sum of all participants' information sources can make it.

The implementation of the Grapevine architecture (Figure 6) builds upon our previous work combining multi-sensor Data Fusion with intelligent agents. Each node in the architecture contains a Data Fusion process that fuses locally obtained data (from local sensors and data sources) and data received from other peer nodes. The Grapevine Manager at each node manages the interchange of data with peer nodes. For each peer node, it contains a Grapevine proxy agent that represents the information needs and capabilities of that peer node. As the sensors or other sources on the platform generate local information, each Grapevine agent evaluates that information against the needs of the peer platform it represents, using factors such as the following (a sketch of the resulting relevance test appears after the figure caption below):
• Sensor type: Data from remote sensors, e.g., JSTARS, is sent only if the recipient does not already have access to that data.
• Mission: For example, the peer platform's mission may or may not require the propagation of friendly tracks.
• Location: The peer platform may only need information within a geographic or temporal/geographic radius.
• Coverage: The peer platform may need information from beyond its own sensor coverage.

Figure 6. Grapevine data dissemination architecture (each node pairs a Data Fusion system with a Grapevine Manager; the manager holds a proxy agent for each peer node, and nodes exchange information needs and capabilities as well as sensor data selected against those needs).
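Below is a minimal sketch of the per-peer relevance test, with hypothetical Report and PeerNeeds records standing in for the needs description a peer advertises; circular areas of interest and coverage footprints are simplifying assumptions.

```python
from dataclasses import dataclass, field
from math import hypot

@dataclass
class Report:
    source: str          # originating sensor, e.g. "JSTARS"
    affiliation: str     # "FRIEND" or "HOSTILE"
    x: float
    y: float

@dataclass
class PeerNeeds:
    has_feeds: set = field(default_factory=set)  # feeds the peer already gets
    wants_friendly: bool = False                 # mission factor
    aoi_center: tuple = (0.0, 0.0)               # location factor
    aoi_radius_m: float = 50_000.0
    position: tuple = (0.0, 0.0)
    coverage_radius_m: float = 20_000.0          # peer's own sensor footprint

def relevant(r: Report, peer: PeerNeeds) -> bool:
    """Apply the four factors above; only relevant reports are sent."""
    if r.source in peer.has_feeds:                             # sensor type
        return False
    if r.affiliation == "FRIEND" and not peer.wants_friendly:  # mission
        return False
    if hypot(r.x - peer.aoi_center[0],
             r.y - peer.aoi_center[1]) > peer.aoi_radius_m:    # location
        return False
    if hypot(r.x - peer.position[0],
             r.y - peer.position[1]) <= peer.coverage_radius_m:  # coverage
        return False
    return True
```

In this rendering, the intelligent pull described next amounts to a peer periodically re-advertising an updated PeerNeeds record, which reconfigures the proxy agent that represents it.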
In addition, the Grapevine agents are aware of the processing and bandwidth limitations of the peer nodes and communication links. Data identified as relevant to a peer node
based on the above criteria may be down-sampled or prioritized to meet resource limitations. Each Grapevine agent
propagates the needed information to the peer platform it
represents, providing an intelligent push of data through the
network.
At the same time, the Grapevine Manager has a representation of the local platform’s information needs and capabilities, expressed in terms of available sensors and data
sources, mission, location, and sensor coverage. A Sentinel
Agent within the Grapevine Manager monitors the local
fused picture to identify information needs not met by the
local picture. Based on this, it sends out updated configuration data for the local platform to the Grapevine Manager on
peer platforms. This is used to update the Grapevine Agents
on the peer platforms that represent the local platform. This
propagation of information needs effects an intelligent pull
of data to meet the changing information needs of the local
platform.
There are several distinctive features of the Grapevine Architecture. First, it is a peer-to-peer network. Propagation of
data occurs between peer nodes in the network, although in
practice this would probably be implemented as an extension
to some form of hierarchical command and control system.
Second, propagation is needs based—peer-to-peer data
propagation includes only data known to be of use to the
recipient node, thus limiting the required processing and
bandwidth. Third, the architecture is extensible. It can accommodate the addition of peer nodes merely by reconfiguring nearby nodes to reflect the addition of the new nodes.
Fourth, it is survivable — there is no single point of failure.
Since, in general, each node will have multiple peers, data
can spontaneously reroute around missing nodes, and thus
the loss of any single node will only result in the loss of the
data sources local to that node.
APPLICATION IN AMUST-D/HSKT

On the in-progress AMUST-D program and its successor, the HSKT ACTD, ATL is developing the Shared Situation Awareness capability that will support the other functions of Pilot and Commander Decision Aiding and Manned/Unmanned teaming. AMUST-D is developing two Decision Aiding systems: the Warfighter's Associate (WA), to be used on the AH-64D Longbow Apache aircraft, and the Mobile Commander's Associate (MCA), to be installed on the UH-60 Blackhawk aircraft also equipped with the Army Airborne Command and Control System (A2C2S). Both systems include Data Fusion to provide a fused picture for Situation Awareness (Figure 7), and both include the capability to provide Level 4 control of an Unmanned Air Vehicle (UAV), with both waypoint-level control of the UAV from the aircraft and direct feed of UAV sensor data to the aircraft. In addition, the WA provides decision aiding in support of the Apache pilot, including route planning and attack planning, while the MCA provides decision aiding in support of a maneuver commander, including a Situation Awareness display, team route planning, and plan monitoring.

Figure 7. Shared Situation Awareness architecture in AMUST-D (on the UH-60 Blackhawk, the MCA decision aid with the A2C2S/JCDB, Intelligent Agent Data Discovery, Data Fusion, and a Grapevine data link interface; on the AH-64D Longbow Apache, the WA decision aid with Apache onboard sources, external sources via Link-16 and the IDM, Data Fusion, and a Grapevine data link interface).
On the WA aircraft, Data Fusion will receive and fuse data from the Apache onboard sensor suite, from teammate aircraft, from UAVs under control of the WA, and from offboard sources such as the Joint Surveillance Target Attack Radar System (JSTARS). On the MCA aircraft, Data Fusion will receive and fuse data from UAVs under control of the MCA and from offboard sources such as JSTARS. In addition, the MCA will include the Intelligent Agent-based Data Discovery system. This will retrieve relevant blue force and red force entity data from the Joint Common Database (JCDB) in the A2C2S system and provide it to Data Fusion for incorporation in the fused picture. Data Discovery will also augment fused tracks generated by Data Fusion with additional information available from the JCDB, such as plan and status information in the case of friendly entities and sensor and weapon capability information in the case of hostile entities.
The Grapevine agent system used in AMUST-D represents a specialized implementation of ATL's intelligent agent technology in two ways. First, the Grapevine agents are implemented in C++ rather than in Java, to facilitate deployment in operational systems with stringent performance and resource requirements. Second, the Grapevine agents are being adapted to operate over non-TCP/IP networks, to facilitate use over existing tactical data links. On AMUST-D, the Grapevine implementation uses the AFAPD message set over the Improved Data Modem (IDM) link to exchange data between peer aircraft. We are currently developing a prototype of this IDM implementation, and in the spring and summer of 2002 we will be conducting experiments to validate the capability of the Grapevine to support Distributed Data Fusion within the bandwidth limitations and message set imposed by the IDM.

The result of this capability is to permit, in the face of stringent bandwidth and processing constraints, the creation of a Common Relevant Picture (CRP) across all participating platforms. The CRP is a shared picture of the battlefield, with all participants having a consistent view of the world, and each participant seeing that portion of the picture that is relevant to its needs. Given unlimited processing and bandwidth, this can scale to become a true Common Operational Picture, with all participants seeing the same complete picture. Given significant limitations on the ability to exchange and process information, as is the case now and for the near future, the intelligent dissemination capability of the Grapevine ensures that all participants receive the most relevant information.
PERFORMANCE STUDY
In order to demonstrate the performance cost-benefit tradeoffs of employing Data Fusion and the Grapevine, we conducted a simulation-based experiment in which a scenario
was played out without Data Fusion, with just Data Fusion,
with Data Fusion and full exchange of sensor data between
platforms, and finally with Data Fusion and the Grapevine.
Performance results presented in this section include data on
completeness and accuracy of the resulting tactical picture as
well as computational and bandwidth cost.
Methodology
The scenario that was used in the experiment is depicted in
Figure 8. In the scenario, two sensor platforms (e.g., helicopters or UAVs) fly through a 50 by 50 kilometer square battle area, throughout which are scattered 49 targets (tanks and other vehicles). The targets are positioned according to a
uniform random distribution, throughout the battle area, and
each has a random velocity between 0 and 10 meters per
second. Each sensor platform carries two sensors. Both sensors on the platform have the same area of coverage, which
is a circle, centered on the sensor platform, with a radius of
20 kilometers. Each sensor is assigned values that define the
error inherent in its reporting of the range and azimuth of a
target. Each assigned error value is defined to be four times
the standard deviation of a normal distribution, giving a confidence interval of plus or minus two standard deviations, or
about 95%. One sensor on each platform has error values of
100 meters in range and 5 degrees in azimuth, while the
other sensor has error values of 1000 meters in range and 0.5
degrees in azimuth. The range and azimuth error values for a
particular measurement act as the orthogonal axes of an uncertainty ellipse that defines an 86% confidence region for
the true position of the target. Each sensor on each platform
reports on each target within its coverage area at the rate of
once each second, throughout the 30-minute scenario. One
sensor platform starts at the southwest corner of the battle
area and maintains a 30 m/s velocity to the northeast
throughout the scenario. The other sensor platform starts at
the southeast corner and maintains a 30 m/s velocity to the
northwest. At the start of the scenario, the two platforms
have disjoint areas of sensor coverage. At about 4 minutes
into the scenario, the coverage areas begin to overlap. At
about 20 minutes into the scenario, the coverage areas overlap nearly 100%, after which, overlap begins to decrease
again.

Figure 8. The performance study scenario (two sensor platforms cross a 50 x 50 km battle area containing 49 randomly distributed targets; each platform has a 20 km sensor coverage radius, and coverage overlap approaches 100% near the 1200-second mark).
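For concreteness, the sketch below generates the scenario's targets and noisy sensor reports under the stated convention that each quoted error value equals four standard deviations (so a 100 m range error implies a 25 m sigma). The function names are ours, not from the study's simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_targets(n=49, size_m=50_000.0, vmax_mps=10.0):
    """Uniformly distributed targets with random speeds up to vmax."""
    pos = rng.uniform(0.0, size_m, (n, 2))
    speed = rng.uniform(0.0, vmax_mps, n)
    heading = rng.uniform(0.0, 2 * np.pi, n)
    vel = np.column_stack((speed * np.cos(heading), speed * np.sin(heading)))
    return pos, vel

def sense(platform_xy, target_xy, range_err_m, az_err_deg):
    """One noisy range/azimuth measurement, returned as an x-y position.
    Quoted errors are 4-sigma values, so sampling uses value / 4."""
    d = target_xy - platform_xy
    r = np.hypot(d[0], d[1]) + rng.normal(0.0, range_err_m / 4.0)
    az = np.arctan2(d[1], d[0]) + rng.normal(0.0, np.radians(az_err_deg) / 4.0)
    return platform_xy + np.array([r * np.cos(az), r * np.sin(az)])

# Sensor 1: 100 m / 5 deg; Sensor 2: 1000 m / 0.5 deg; each reports at
# 1 Hz on every target within the platform's 20 km coverage circle.
```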
Throughout the scenario, we collected performance measurements on CPU usage, number of reports being input to
Fusion, number of tracks resulting from fusion of those reports, the size of the uncertainty region of the sensor reports
and of the fused tracks, and the number of sensor reports
passed between sensor platforms. With these metrics, we are
able to conduct a cost-benefit comparison of the options of
No Data Fusion, Data Fusion, Data Fusion with Full Exchange of sensor data between platforms, and Data Fusion
with Grapevine.
The experiment was conducted using a pair of Sun Ultra 10
workstations, connected by a 100-megabit Ethernet, which is
shared with a number of other nodes.
Results
Fusion Versus No Fusion. Multi-sensor Data Fusion provides significant improvements in the quality of the picture
available to a user at a given platform, as measured by
screen clutter and accuracy of track information.
Benefits: Less Clutter, More Accuracy. The benefits of employing Data Fusion can be expressed in terms of a variety of metrics. The two easiest to measure and present are screen clutter (plotted tracks per target) and accuracy, i.e., the size of the uncertainty region associated with a track's reported position. Figure 9 addresses the issue of screen clutter by comparing the number of plotted tracks throughout the scenario with and without Data Fusion. Without Data Fusion, each sensor on the platform reports separately on the target for each second of the scenario, resulting in a constant two plots per detected target on the display. With Fusion, the sensor reports are combined so that, with some variation, the average number of plots per detected target over time is just under 1.1, resulting in a clearer, less cluttered track picture.

Figure 9. With Data Fusion, the number of display plots per target is reduced to near one (plots per target vs. time, with and without Data Fusion).
Figure 10 addresses the issue of accuracy, by comparing the
average size of the uncertainty region for all the tracks over
the course of the scenario, with and without Data Fusion.
The uncertainty value is arrived at by taking the product of
the length and width of each elliptical uncertainty region.
Without fusion, the dimensions of the uncertainty region are
directly related to the range and azimuth errors (expressed in
meters) associated with each sensor measurement. With
Data Fusion, a new uncertainty region for a track is arrived
at by statistical combination of the uncertainty regions of the
contributing sensor measurements. Over time, this yields
tracks with much smaller uncertainty than the underlying
sensor data. The two curves, plotted on a logarithmic scale,
show that, in this scenario, uncertainty regions without Data Fusion are on the order of eight times the area of the uncertainty regions resulting from Data Fusion.

Figure 10. With Data Fusion, the confidence region for track position is greatly improved (uncertainty in m² vs. time, log scale).
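The "statistical combination" can be illustrated with the standard covariance-weighted fusion of two independent Gaussian position estimates, sketched below; the area proxy mirrors the paper's length-times-width metric, with each full axis taken at plus or minus two standard deviations. This is an assumed formulation, not the paper's exact algorithm.

```python
import numpy as np

def fuse(x1, P1, x2, P2):
    """Covariance-weighted combination of two independent Gaussian
    position estimates; the fused covariance is never larger than P1."""
    K = P1 @ np.linalg.inv(P1 + P2)
    return x1 + K @ (x2 - x1), P1 - K @ P1

def area_proxy(P):
    """Length x width of the uncertainty ellipse, with each full axis
    taken as 4 sigma (plus or minus two standard deviations)."""
    axes = 4.0 * np.sqrt(np.linalg.eigvalsh(P))
    return float(axes[0] * axes[1])
```

Because the fused covariance shrinks with every associated report, repeated fusion drives the area proxy well below that of any single sensor, which is the roughly eightfold improvement seen in Figure 10.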
Cost: CPU Usage. The benefits of Data Fusion come at the
cost of increased CPU usage. Throughout the course of the
experiment, for each Data Fusion processing cycle, we gathered the number of sensor reports being input to Data Fusion
and the number of CPU seconds required to fuse them. Figure 11 shows the results as the average number of CPU seconds required by Data Fusion, plotted against the number of
sensor reports input to Data Fusion. The plot shows some
irregularity, due to artifacts of the sampling process, and the
lack of data for some values of input report count, but
nonetheless exhibits a roughly linear character. This is not
unexpected. The time required to perform Data Fusion increases superlinearly with respect to the average size of target clusters. However, given constant cluster size, the performance of Data Fusion is linear with respect to the input
size. Since the targets in the scenario are scattered about the
battlefield according to a uniform random distribution, the
cluster size remains roughly constant throughout the scenario, resulting in the linear character of the performance
curve in the figure.

Figure 11. Data Fusion CPU usage increases linearly with input size (CPU seconds vs. input reports per second).
Fusion With Full Data Exchange Versus Data Fusion
Without Data Exchange. Significant improvement in the
completeness and accuracy of the picture at any given platform can be achieved by the exchange of all available data
among platforms, and the resulting fusion at each platform
of all available data.
Benefit: Better Coverage. When executing Data Fusion on a single platform, the platform's tactical picture becomes clearer and more accurate, but without inclusion of off-board information the tactical picture is not complete. By sharing sensor data between sensor platforms, however, each sensor platform can have an effective coverage area that is the union of the coverage areas of the individual platforms, thus providing a more complete tactical picture. To demonstrate, we ran the scenario again, with Data Fusion, but this time each sensor report that Data Fusion received from one of the platform's onboard sensors was also sent across a communications link (TCP/IP sockets, in this case) to be used as input to Fusion on the other platform as well. It is important to note that no fused tracks were sent, only the raw sensor reports; the fusion of previously fused data presents special problems that are avoided by exchanging sensor data. Figure 12 compares the coverage of each Data Fusion platform, both with and without full exchange of sensor data. The coverage value is given by the number of targets seen by the platform, expressed as a percentage of the total number of potential targets on the battlefield (a total of 50 targets, including the other sensor platform). The figure shows that, throughout the scenario, the exchange of sensor data results in a significantly higher, and essentially identical, coverage value for each sensor platform, with the exception of a period around the 1200-second mark, when the coverage areas of the individual platforms overlap nearly completely. It is worth noting that the exchange of sensor data results in the two Data Fusion platforms seeing sets of tracks that are equal not only in number but also in identity. The result is that the two platforms share a common tactical picture on which to base cooperative actions. Figure 13 shows that the benefits of data exchange do not come at the expense of accuracy, since the uncertainty in the position of the fused tracks is essentially equivalent with or without data exchange.

Figure 12. Coverage of the battlefield is improved, for both platforms, with data exchange (% coverage vs. time; East and West platforms, with and without exchange).

Figure 13. Data Fusion maintains improved accuracy with full sensor data exchange (uncertainty in m² vs. time, log scale).

Cost: Bandwidth and CPU Usage. The cost of exchanging all this sensor data comes in the form of increased resource usage. Each Data Fusion process requires additional CPU time to process the sensor reports originating from the other platform, and each sensor report sent between platforms consumes communications bandwidth. From Figure 11, we know that, in this scenario, Data Fusion CPU requirements increase linearly with respect to an increase in input sensor reports. With this in mind, we can see the additional processing burden imposed by sensor data exchange by looking at the resulting increase in the rate of sensor reports input to Data Fusion. Figure 14 illustrates the relationship between the total number of sensor reports processed each second by Data Fusion with full data exchange versus without data exchange. Variability in the upper curve is attributed to communications link timing. It should be no surprise that, since every sensor report produced by a sensor is used as input to the local Data Fusion process and is also sent across the communications link to be used as input to the other Data Fusion process, the total number of reports processed with exchange is twice the number without exchange.

Figure 14. Full data exchange multiplies the total computational burden (input reports per second vs. time, with and without exchange).

Figure 15 shows the communications cost of full data exchange, in terms of messages sent by each platform individually, as well as the combined total number of messages transmitted. It is important to note that, around the 1200-second mark in the scenario, the total message traffic is near its peak, but the benefit derived from the message traffic is at its minimum. At that time in the scenario, the sensor coverage areas of the two platforms are almost completely overlapped. As a result, very few of the messages convey information that the recipient does not already have from its own onboard sensors. Precious bandwidth is being wasted passing redundant information.

Figure 15. Message traffic resulting from full sensor data exchange (messages per second vs. time; East, West, and Total).
Grapevine Versus Full Data Exchange. Nearly equivalent
improvements in completeness and accuracy can be
achieved, at a significant reduction in bandwidth and processing cost, through the use of Grapevine technology for
intelligent dissemination of information.
Benefits: Decreases Bandwidth and CPU Cost. The Grapevine mitigates the costs associated with full data exchange,
while retaining the benefits, by restricting message production to just those sensor reports that provide benefit to the
other platform. For instance, it was noted that in the case of
full data exchange, when there is substantial overlap in sensor coverage areas between platforms, a substantial portion
of the message traffic is of little or no benefit to the receiving platform, since that platform already has reports on the
same target from its own sensors. Grapevine acts as a filter,
to make sure that if the other platform doesn’t need a report,
it isn’t sent. In this experiment, a receiving sensor platform
is deemed to need a sensor report if the position of the target,
as given by the report, is not currently within the sensor coverage area of the receiving platform. In addition, in this experiment, Grapevine further reduces resource burden
through a down-sampling strategy in which the reports from
every other reporting cycle of each sensor are simply discarded by Grapevine. As a result, only half of those reports
that would have been deemed to be needed by the receiving
platform are actually sent.
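The experiment's filter thus reduces to two rules, sketched below with a hypothetical should_send helper: suppress reports whose target already lies inside the receiver's 20 km coverage circle, and drop alternate reporting cycles.

```python
from math import hypot

def should_send(report_xy, receiver_xy, cycle,
                coverage_radius_m=20_000.0):
    """Send only if the target is outside the receiver's own coverage,
    and only on alternate reporting cycles (50% down-sampling)."""
    outside = hypot(report_xy[0] - receiver_xy[0],
                    report_xy[1] - receiver_xy[1]) > coverage_radius_m
    return outside and cycle % 2 == 0
```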
The benefits of this approach can be seen in its impact on communication bandwidth and CPU usage, depicted in Figures 16 and 17, respectively. Figure 16 shows the impact on message traffic by comparing the total number of messages exchanged with full data exchange versus with the Grapevine. At the start of the scenario, when there is little sensor coverage overlap between platforms, the message traffic is cut in half by Grapevine due to the 50% down-sampling. As the scenario continues, sensor coverage overlap grows, and the gap between full exchange and Grapevine widens due to the reduction in the need for sharing, as determined by Grapevine. Around the 1200-second mark, when sensor coverage overlap is near 100%, Grapevine determines that neither platform has sensor reports that are needed by the other, and message traffic falls to near zero. As the scenario continues, sensor overlap begins to decrease again, Grapevine perceives an increase in benefit to be had from message exchange, and message traffic increases. Figure 17 shows the impact of Grapevine on CPU usage by comparing the total number of sensor reports processed by Data Fusion with full exchange versus Grapevine. The reduction in the number of input reports by Grapevine is directly attributable to the difference in message traffic between the two approaches. The idea, again, is that computational cost is increased by Grapevine only when there is a perceived benefit.

Figure 16. Use of the Grapevine reduces bandwidth cost over full data exchange (messages sent per second vs. time).

Figure 17. Grapevine reduces computational cost by reducing input reports from other platforms (input reports per second vs. time).

Cost: CPU Usage. The cost of employing the Grapevine is the computational cost of applying the filtering criteria to sensor reports that are candidates for sharing. While this cost was not measured in this experiment, we expect the computational cost of applying filtering to a sensor report at the transmitting end to be small compared to the cost of fusing a sensor report at the receiving end.

To complete the comparison of Data Fusion with Grapevine versus Data Fusion with full data exchange, we also need to look at what effect the Grapevine has on the benefits that were provided by Data Fusion with full exchange. Figure 18 shows that the benefits of greater coverage provided by full exchange are nearly unaffected by use of the Grapevine in place of full exchange. Figure 19 shows that while there is some increase in error using the Grapevine, the level of error is still significantly lower than if Data Fusion were not used. Furthermore, it is important to realize that any increase in average error is due almost entirely to the Grapevine's selective reporting on those targets that would not be seen at all by the sensor platform without some form of sensor data exchange.

Figure 18. Use of the Grapevine does not reduce battlefield coverage (% coverage vs. time; East and West platforms, full exchange vs. Grapevine).

Figure 19. Uncertainty in target position remains low with Grapevine (uncertainty in m² vs. time, log scale).

To summarize: the use of Data Fusion offers the benefits of lower display clutter and increased positional accuracy at a modest computational cost. Exchanging sensor data between sensor platforms offers the benefits of increased sensor coverage for each platform and a shared tactical picture between platforms. Intelligent, selective data exchange, such as that provided by Grapevine, can mitigate the resource usage costs incurred by full data exchange, with minimal trade-off in coverage or accuracy.

A variety of alternative strategies may be employed by the Grapevine for intelligent selective data dissemination. Future work is planned to study additional Grapevine data dissemination strategies, as well as techniques for tailoring these strategies to provide maximum operational benefit in the face of dynamic and widely varying resource constraints, environmental challenges, and operational goals.

CONCLUSIONS

The development of Situation Awareness on the digital battlefield, critical to the support of mobile command of distributed forces, faces numerous challenges. It requires the ability to integrate information from all available sources and to share information between forces to the maximum extent possible, yielding a Common Relevant Picture (CRP). In this paper, we have described three technologies developed at Lockheed Martin Advanced Technology Laboratories that enable the creation of the CRP:
• Real-Time Multi-Sensor Data Fusion
• Intelligent Information Agents
• Grapevine Information Dissemination
We have presented quantitative results validating the following claims about the value of these technologies:
• Multi-sensor Data Fusion provides significant improvements in the quality of the picture available to a user at a
given platform, as measured by screen clutter and accuracy of track information.
• Significant improvement in the completeness and accuracy of the picture at any given platform can be achieved
by the exchange of all available data among platforms,
and the resulting fusion at each platform of all available
data.
• Nearly equivalent improvements in completeness and accuracy can be achieved, at a significant reduction in
bandwidth and processing cost, through the use of Grapevine technology for intelligent dissemination of information.
We have described ongoing work intended to provide further
validation of the benefits obtained through the use of the
Grapevine technology using actual tactical data link hard-
ware, software, and message sets. Finally, we have described
work currently in progress under the AMUST-D program
that will lead to the deployment of these technologies in
flight-tested decision aiding systems, with a full-scale Military Utility Assessment (MUA) to be performed under the
HSKT program. Based on the presented results and the ongoing work, we are confident that these technologies will
form the foundation of the shared situation awareness capability that will support mobile commanders in all elements of
the digital battlefield of the future.
ACKNOWLEDGMENTS
This research was partially funded by the Aviation Applied
Technology Directorate under Agreement No. DAAH10-01-2-0008. The U.S. Government is authorized to reproduce
and distribute reprints for Government purposes notwithstanding any copyright notation thereon. A discussion of the
combination of these three technologies in support of the
AMUST-D and HSKT programs has not been previously
published.
DISCLAIMERS
The views and conclusions contained in this document are
those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of
the Aviation Applied Technology Directorate or the U.S.
Government.
REFERENCES
[1] Malkoff, D. and Pawlowski, A., “RPA Data Fusion,”
9th National Symposium on Sensor Fusion, Vol. 1, Infrared Information Analysis Center, pp. 23-36, September 1996.
[2] Hofmann, M., "Multi-Sensor Track Classification in Rotorcraft Pilot's Associate Data Fusion," American Helicopter Society 53rd Annual Forum, Virginia Beach, Virginia, April 29-May 1, 1997.
[3] Whitebread, K. and Jameson, S., “Information Discovery in High-Volume, Frequently Changing Data,”
IEEE Expert/Intelligent Systems & Their Applications,
Vol. 10, No. 5, October 1995.
[4] Lentini, R., Rao, G., and Thies, J., "EMAA: An Extendable Mobile Agent Architecture," AAAI Workshop on Software Tools for Developing Agents, July 1998.
[5] Pawlowski, A. and Stoneking, C., “Army Aviation Fusion of Sensor-Pushed and Agent-Pulled Information,”
American Helicopter Society 57th Annual Forum, Washington, DC, May 9-11, 2001.
[6] Jameson, S.M., “Architectures for Distributed Information Fusion To Support Situation Awareness on the
Digital Battlefield,” Fourth International Conference on
Data Fusion, Montreal, Canada, August 7-10, 2001.