Data
The simulations described above usually need three to six months to instantiate a new scenario and cost around one million US dollars to get ready to run (terrain and performance data developed, quality controlled, and input; scheme of maneuver developed, instantiated, and tested). Data is always challenging. Performance data must be developed to account for every interaction that could occur between all of the systems represented on the battlefield, which is especially difficult when examining future scenarios with emerging technology. Even developing data to simulate today’s forces comes with challenges. The US Army has perhaps the most robust process for developing performance data, but even that process draws only about 10% of its data from actual testing. That data is collected at ranges such as Aberdeen Proving Ground, where, in a controlled environment, US Army weapon systems are fired at captured enemy systems to determine the enemy systems’ vulnerability to US weapons, and captured enemy systems are fired at actual US Army systems to determine the US systems’ vulnerabilities. Often several US ground combat vehicles are rolled off the production line with the express intent of testing their vulnerability to enemy systems. After test firing, engineers determine and record the damage caused, and that record becomes the basis for the performance data generated for ground combat simulations. The other 90% of the data is then “surrogated,” that is, interpolated, extrapolated, or otherwise estimated from the existing test data, often using engineering‐level simulations.
Ground combat weapon systems are relatively inexpensive and numerous, so their vulnerabilities can be tested given the availability of the appropriate enemy weapon systems and ammunition. The Navy and the Air Force, by contrast, are hard pressed to produce test data against their platforms. Firing captured adversary anti‐ship missiles at a multibillion‐dollar Ford‐class aircraft carrier to see how many hits it can withstand before sinking is simply not possible, so the data used is often more an educated guess than a mathematical approximation. One of the biggest threats to today’s naval vessels is the anti‐ship missile (ASM), yet there have been fewer than 300 recorded instances of ASM hits on vessels from which data could be developed.31
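To make surrogation concrete, the sketch below builds a probability‐of‐kill (Pk) table from a handful of measured points. It is a minimal illustration only: the weapon–target pairing, ranges, and Pk values are invented, and simple linear interpolation stands in for the engineering‐level simulations the services actually use.

```python
import numpy as np

# Hypothetical live-fire results for one weapon-target pairing:
# probability of kill (Pk) measured at only a few ranges -- the ~10%
# of the table that comes from actual testing. Values are invented.
test_ranges_m = np.array([500.0, 1000.0, 2000.0])
test_pk = np.array([0.85, 0.60, 0.25])

# "Surrogate" the remaining ~90%: np.interp interpolates linearly
# between test points and clamps to the nearest measured value beyond
# them, a stand-in for real interpolation/extrapolation methods.
table_ranges_m = np.arange(250.0, 3001.0, 250.0)
table_pk = np.interp(table_ranges_m, test_ranges_m, test_pk)

for r, pk in zip(table_ranges_m, table_pk):
    print(f"range {r:6.0f} m -> estimated Pk {pk:.2f}")
```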
Simulating the Reality of Combat
Many, if not most, of today’s computer‐based combat simulations are extraordinarily complex (the Concepts Evaluation Model, a theater‐level deterministic closed‐loop combat simulation used by the US Army’s Concepts Analysis Agency in the late twentieth century, ran to over 250,000 lines of computer code). That complexity does not, however, translate into a model that can accurately predict the outcome of combat, and it has given rise to two schools of thought. Simulation skeptics refer to these simulations as “black boxes,” meaning that their users have little to no understanding of the processes that convert inputs into outputs. Some simulation advocates believe that, given this complexity, all the processes of combat are modeled to a high degree of precision, and thus the simulation’s outputs must be accepted without question or debate. Subscribers to either school miss the fundamental truth that the models in these simulations are abstractions and approximations of the specific aspects of combat the simulation was originally designed to represent. All simulations are composed of one or more models, and all models are abstractions of reality, with some processes modeled explicitly, some implicitly, and some not at all because the simulation’s designer never intended the simulation to address them. Prospective users need to research and understand, at least at a basic level, what a simulation under consideration was originally designed to model, and what its strengths and shortcomings are, before selecting it for a particular study. In a RAND paper examining non‐monotonic, chaotic output from a very simple deterministic, Lanchester‐based combat simulation (2 variables, 18 data elements, and 8 rules), the authors state: “The typical model simulates combat between opposing forces at some level of abstraction. No combat model is seriously expected to be absolutely predictive of actual combat outcomes. It is common, however, to expect models to be relatively predictive. That is, if a capability is added to one side and the battle is refought, the difference in battle outcomes is expected to reflect the contribution of the added capability.”32 This concept of relative predictability underpins the use of simulations to conduct Analyses of Alternatives (AoAs), the studies used to justify weapon system acquisitions in the DoD.
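To show what relative predictability means in practice, here is a minimal sketch built on Lanchester’s square law; it is not the RAND model cited above, and the force sizes and kill rates are invented for illustration.

```python
# Lanchester square-law attrition: dB/dt = -r*R, dR/dt = -b*B,
# integrated with a simple Euler step until one side is eliminated.

def fight(blue, red, blue_kill_rate, red_kill_rate, dt=0.01):
    """Return surviving (blue, red) strengths when one side reaches zero."""
    while blue > 0 and red > 0:
        blue, red = (blue - red_kill_rate * red * dt,
                     red - blue_kill_rate * blue * dt)
    return max(blue, 0.0), max(red, 0.0)

# Baseline battle (all numbers hypothetical).
baseline = fight(blue=1000, red=1200, blue_kill_rate=0.010, red_kill_rate=0.008)

# "Add a capability" to Blue (say, a sensor that raises its kill rate)
# and refight; the *difference* between the two runs, not either
# absolute outcome, is what an analyst would attribute to the capability.
upgraded = fight(blue=1000, red=1200, blue_kill_rate=0.013, red_kill_rate=0.008)

print("baseline survivors (blue, red):", baseline)
print("upgraded survivors (blue, red):", upgraded)
```

In this invented case the upgrade flips the result from a Red win to a Blue win, and an AoA‐style comparison would credit that swing to the added capability.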
The Capability and Capacity of Modern Computing to Represent Combat
Moore’s law, the observation that the processing power of computers doubles roughly every 18–24 months, has fed the school of thought that our simulations are accurate representations of kinetic combat. However, most of the combat simulations in use today have their roots in the 1960s or 1970s, and in most cases their computer code has not been optimized to take advantage of this increase in processing power. Even more significant is the notion, drawn from artificial intelligence (AI), that we can accurately represent every aspect of combat in minute detail. The advances in AI that should disabuse us of this notion begin with IBM’s computer “Deep Blue,” which was programmed to play chess and in 1997 beat Garry Kasparov, then the reigning world chess champion.33 A more recent AI triumph was “AlphaGo,” which was programmed to play the ancient game of Go and beat world Go champion Ke Jie in 2017.34 “As Deep Blue and AlphaGo have demonstrated, in games of finite size with well‐specified rules, computers can use artificial intelligence (AI) techniques to top human performance.”35
Let us now examine ground combat and see whether it fits the description of the games AI has successfully mastered, specifically games of finite size with well‐specified rules.
Finite Size
For finite size, we need to consider the number of pieces, or entities, and the size of the game board, or terrain box.
Number of Pieces/Entities
Chess has six different types of pieces: pawn, knight, bishop, rook, king, and queen, for a total of 16 pieces per side, or 32 pieces in all. The US Army’s Armored Brigade Combat Team (ABCT) has over 4700 soldiers and over 1300 vehicles, including 90 tanks, 90 infantry fighting vehicles, 112 armored personnel carriers, 18 self‐propelled howitzers, and 4 unmanned aerial vehicles.36 This does not include other assets that would support the ABCT in combat, such as Army and Air Force attack aircraft and Army logistics assets. When an ABCT is represented in a combat simulation, only the primary combatants are represented: about 300 vehicles and 200 infantry, or 500 of the roughly 6000 entities that would actually be on the battlefield. Assuming the ABCT’s adversary were similarly sized and equipped, the simulation would represent around 1000 entities, compared to chess’s 32 pieces.
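A back‐of‐the‐envelope calculation using the counts above also hints at why the performance‐data problem discussed earlier is so hard: the number of distinct shooter–target pairings grows quadratically with the entity count. The figures below are illustrative only.

```python
# Compare "game size" for chess versus a simulated brigade battle,
# using the entity counts cited in the text above.

def pairings(n: int) -> int:
    """Distinct unordered shooter-target pairings among n entities."""
    return n * (n - 1) // 2

chess_pieces = 32
abct_side = 300 + 200      # primary-combatant vehicles + infantry per side
battle = 2 * abct_side     # ~1000 entities with a similarly sized adversary

print(f"chess: {chess_pieces} pieces, {pairings(chess_pieces)} pairings")
print(f"battle: {battle} entities, {pairings(battle)} pairings")
```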
Terrain
The