The following article first appeared on Warrior Maven, a Military Content Group member website.
What if a US Navy unmanned surface vessel’s vertical towed array sonar, trailing beneath the surface, detects an enemy submarine in position to attack US surface warships with torpedoes, and instantly shares that time-sensitive data with undersea and aerial drones positioned to respond? A forward-positioned undersea drone might either function as a weapon itself, detonating against the enemy submarine, or transmit targeting data to a US Navy submarine able to attack from safer stand-off distances. Perhaps an aerial drone or helicopter, using location data received from the surface and undersea drones, could employ laser scanning and EO/IR targeting to find and destroy the enemy submarine when it comes close to the surface. Most significantly, the decision to attack and destroy a manned enemy submarine using this processed and networked intelligence would be made by human decision-makers performing command and control from the surface, the air or undersea. This kind of scenario, drawing on both the speed of AI-enabled data processing and uniquely human decision-making faculties, is precisely the kind of Concept of Operation now being pursued by US weapons developers both within and across the military services. With US military progress advancing these technologies and concepts at lightning speed, many are likely to wonder what US adversaries such as Russia and China are doing in the same area.
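As a rough illustration of the division of labor this Concept of Operation implies, consider the minimal sketch below: detections from several unmanned platforms are fused into a single shared track at machine speed, while the machine’s only output is a recommendation that a human commander must explicitly authorize. Every platform name, field and number here is hypothetical, invented for illustration rather than drawn from any actual Navy program.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop targeting flow. All platform
# names, fields and numbers are invented; nothing here reflects a real
# Navy system.

@dataclass
class Detection:
    sensor_id: str     # e.g. "usv-towed-array-1", "uuv-7", "helo-eoir-3"
    lat: float
    lon: float
    depth_m: float
    confidence: float  # each sensor's own confidence in the contact, 0..1

def fuse_track(detections: list[Detection]) -> dict:
    """Combine multi-platform detections into one shared track."""
    n = len(detections)
    return {
        "lat": sum(d.lat for d in detections) / n,
        "lon": sum(d.lon for d in detections) / n,
        "depth_m": sum(d.depth_m for d in detections) / n,
        # Chance at least one sensor is right, assuming independent sensors.
        "confidence": 1 - math.prod(1 - d.confidence for d in detections),
        "sources": [d.sensor_id for d in detections],
    }

def request_engagement(track: dict) -> bool:
    """The machine proposes; only a human authorizes lethal force."""
    print(f"TRACK from {track['sources']}: conf={track['confidence']:.2f} "
          f"at ({track['lat']:.3f}, {track['lon']:.3f}), {track['depth_m']:.0f} m")
    return input("Commander, authorize engagement? [y/N] ").strip().lower() == "y"

if __name__ == "__main__":
    track = fuse_track([
        Detection("usv-towed-array-1", 24.512, 122.031, 180.0, 0.72),
        Detection("uuv-7",             24.509, 122.037, 174.0, 0.64),
    ])
    if request_engagement(track):
        print("Targeting data released to the shooter.")
    else:
        print("Track held for continued surveillance.")
```

The point of the sketch is the split the article describes: fusion and bookkeeping run at machine speed, while the release decision stays with a person.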
“Manned-unmanned teaming” and “human-machine interface” could be described as Pentagon “favorite terms” in the realm of weapons development and concepts of operation aimed at preparing for future warfare. The approach is deeply grounded in a clear recognition that any future warfare engagement is best approached with a carefully blended, integrated combination of high-speed AI-enabled analytics, autonomy and robotics, and certain attributes unique to human decision-making. The conceptual, even “philosophical,” foundation of this approach holds that specific functions, including data organization, analysis, high-speed processing and problem solving, can of course be performed exponentially faster and more efficiently by machines than by humans. At the same time, Pentagon weapons developers operate with the widespread recognition that there are faculties and attributes specific to human consciousness and cognition that mathematically generated computer algorithms simply cannot replicate.
Modern Combined Arms Maneuver
This US approach continues to generate promising combinations of next-generation technology and human-envisioned concepts of operation in preparation for future armed conflict, yet will US adversaries approach this critical and nuanced blending of manned-unmanned teaming in a similar fashion? Perhaps not, according to a significant new Army intelligence report publishing research findings on the combat environment expected to define the coming decade. Among many things, the Army’s “The Operational Environment 2024-2034: Large-Scale Combat Operations” (US Army Training and Doctrine Command, G-2) examines robotics, AI, unmanned systems, sensing, weapons usage and evolving doctrinal and strategic adjustments to new threats. Major rivals such as the People’s Republic of China, the report maintains, are pursuing manned-unmanned teaming with comparable intensity. In particular, the People’s Liberation Army (PLA) appears to be attempting to replicate the fast-evolving US progress connecting manned and unmanned systems across multiple domains simultaneously.
“China is focused on developing teaming software that could be used for unmanned underwater and surface vessels under multiple configurations. It is funding research in manned-unmanned teaming, which could provide significant battlefield gains as neither a human nor machine acting on its own is as effective as both working in tandem,” the report states.
The report also examines some of the variations, complexities and differing approaches informing how countries will integrate AI and unmanned systems into their Concepts of Operation. One key finding is that not only will future warfare be driven by AI, unmanned systems and ubiquitous “sensors” creating a “transparent” battlefield, but that major adversaries or rivals such as China appear to prioritize the “science” of AI, autonomy and computing above the “art,” or human component, of combat decision-making. This emphasis introduces key implications addressed in the report.
“China’s leadership is concerned about corruption within the PLA’s ranks, especially at the lower levels, and to the extent possible wants to remove the individual soldier from the decision-making process in favor of machine-driven guidance. This is in stark contrast to the U.S. Army’s way of war, which relies heavily on warfare as an artform. The U.S. Army sees its Soldiers as its greatest advantage in battle and relies on their intuition, improvisation, and adaptation to lead to victory,” states the Army’s “The Operational Environment 2024-2034: Large-Scale Combat Operations.”
Can advanced AI-enabled algorithms incorporate the more subjective phenomena fundamental to human decision-making, such as emotion, ethics and the mix of variables informing human psychology? This belief in the primacy of human decision-making, Pentagon weapons developers maintain, is particularly critical when it comes to decisions about the use of lethal force. This does not suggest that AI-enabled computing cannot perform time-sensitive warfare tasks with accuracy, precision and speed, but rather that an “optimal” approach to warfighting and modern Combined Arms Maneuver requires a key mixture of what is best in both AI-empowered systems and human cognition. Sure enough, advanced US weapons developers and industry partners are making progress on algorithms increasingly able to make what could be called more “subjective” determinations, such as discerning the difference in meaning between a formal dance (“ball”) and a tennis “ball” by examining a wide range of variables, including context and surrounding words; a toy sketch of this idea appears below. That being said, many are of the view that even as next-generation AI algorithms look more holistically at a host of variables and indicators in relation to one another in real time, human consciousness simply cannot be “replicated” by computers. This is particularly significant, the Army report indicates, when it comes to decisions about lethal force and the value of humans weighing and analyzing the “art” of war alongside the “science.”
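Here is a toy version of the context-based disambiguation described above, assuming a simple bag-of-words approach: the sense of “ball” is scored against the words around it. The cue lists are invented for illustration and are far cruder than anything fielded operationally.

```python
# Toy word-sense disambiguation for "ball": count overlaps between the
# surrounding words and cue words for each sense. Cue lists are invented
# for illustration only.

SENSE_CUES = {
    "formal dance": {"gown", "waltz", "orchestra", "ballroom", "invitation"},
    "sports ball":  {"tennis", "racket", "serve", "court", "bounce", "net"},
}

def disambiguate(sentence: str) -> str:
    words = set(sentence.lower().replace(",", "").split())
    scores = {sense: len(words & cues) for sense, cues in SENSE_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(disambiguate("She wore a gown to the ball and danced a waltz"))
# -> formal dance
print(disambiguate("He hit the ball over the net with his racket"))
# -> sports ball
```

Production systems weigh far richer signals than word overlap, but the principle is the same: the meaning of “ball” is inferred from its surroundings rather than from the word alone.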
Philosophical Influence?
It does not seem like a stretch to view the US military’s ethics and beliefs regarding the inherent “subjectivity,” or “artistic” elements, of human consciousness in relation to Western philosophical accounts of consciousness, perception and epistemology (the theory of knowledge). The 18th-century German philosopher Immanuel Kant, for example, arguably expanded US, European and global thinking about human consciousness through his account of thought and perception. In his famous 1781 “Critique of Pure Reason,” Kant makes the case for the inherent “subjectivity” of individual perception by arguing that human beings do not all perceive and interpret the external world in precisely the same fashion. In several of his works, Kant explained that once elements of the external world are “perceived,” or “taken in,” by the human mind, they become part of subjective cognition and are therefore subject to the wide range of factors and variables shaping human consciousness and perception.
For this reason, Kant occupies a special and valuable place within the trajectory of Western thought, operating in a certain way as a bridge to more modern notions of the human mind. Many philosophers preceding Kant, known as the Empiricists (Locke, Hume), maintained that the mind was merely a kind of “mirror,” reflecting the same external reality or set of conditions for everyone. Kant, however, and the English Romantics who followed him, instead viewed human consciousness as inherently “subjective” and, as Kant puts it, part of “subjective cognition.” Simply put, Kant argued that the same set of external circumstances, which might be thought of as uniform, are not interpreted or understood the same way by individuals. Instead of functioning primarily as a mirror reflecting the same external reality, the mind operates more like a “lamp” shedding its own light upon the process of perception. Sure enough, in the 1950s the Cornell University scholar M.H. Abrams wrote the now-famous “The Mirror and the Lamp,” a critique of the evolution of epistemological understanding through the 1700s and 1800s and into today. The human mind and imagination, Abrams maintained, function more like a “lamp” than a “mirror”: individuals each perceive and interpret the same external conditions differently, a reality we mostly take for granted these days. How do AI-enabled algorithms approximate this? Can AI-generated systems evaluate the somewhat ineffable, less “quantifiable” variables woven into human imagination, intention, emotion, ethics or intuition?
The Pentagon’s doctrinal approach to AI and its “human-in-the-loop” philosophy could arguably have been influenced by generations of American and European philosophical thinking about consciousness. While we may see computing automation and AI-enabled weapons used for purely defensive purposes in the future, humans are arguably best positioned to make the determination of whom to attack, and when, to save lives in war. What if a computer mistakes an innocent civilian for an enemy combatant? Will it make an accurate determination in every instance? What if the explosion of an enemy target will send fragmentation into an urban area, killing civilians? What if an AI-enabled targeting system is spoofed or returns a “false positive”? The Pentagon is aware of these risks, which is why US military weapons developers and their industry partners are pursuing what is called “zero trust,” a term identifying ongoing efforts to make AI more consistently reliable and accurate. A rough sketch of what such a guard rail might look like follows.
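One way to picture the guard rails this reliability work implies is a simple deferral rule: any engagement recommendation is suppressed and routed to a human whenever classification confidence is low or civilians could be harmed. The threshold and field names in this sketch are hypothetical, not drawn from any real program.

```python
# Hypothetical guard-rail sketch: the machine defers to a human operator
# whenever its classification is uncertain or civilians may be at risk.
# The threshold and field names are invented for illustration.

CONFIDENCE_FLOOR = 0.95  # below this, never auto-recommend engagement

def engagement_decision(target_class: str, confidence: float,
                        civilians_in_blast_radius: int) -> str:
    if target_class != "combatant":
        return "HOLD: not classified as a combatant"
    if confidence < CONFIDENCE_FLOOR:
        return "DEFER: confidence too low; route to human operator"
    if civilians_in_blast_radius > 0:
        return "DEFER: projected civilian harm; route to human operator"
    return "RECOMMEND: present engagement option to the commander"

# A spoofed return or an ambiguous contact falls through to a person,
# not a weapon.
print(engagement_decision("combatant", 0.88, 0))  # DEFER: confidence too low
print(engagement_decision("combatant", 0.99, 3))  # DEFER: civilian harm
print(engagement_decision("combatant", 0.99, 0))  # RECOMMEND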