Conversations on Strategy Podcast

U.S. Army War College Public Affairs

Conversations on Strategy features quick analyses of timely strategic issues. Topics are geared toward senior military officials, government leaders, academicians, strategists, historians, and thought leaders interested in foreign policy, strategy, history, counterinsurgency, and more. The series first aired in March 2022 and includes more than 25 episodes that range in length from 15 to 30 minutes. Guests include Press authors and subject matter experts from the US Army War College and other PME and academic institutions who discuss hot topics like the Russia-Ukraine War, China, Taiwan, artificial intelligence, manned-unmanned teaming, infrastructure, terrorism, urban warfare, the Middle East, and more. The entire series can be found at: https://www.dvidshub.net/podcast/581/conversations-on-strategy-podcast

Episodes

Conversations on Strategy Podcast – Ep 2 – Dr. Roger Cliff – Broken Nest - China and Taiwan (Part 2)
Dec 6 2023
This podcast combines cutting-edge understandings of deterrence with empirical evidence of Chinese strategic thinking and culture to build such a strategy and explores the counterarguments from Part 1 of this series. Read the original article: https://press.armywarcollege.edu/parameters/vol51/iss4/4/ Keywords: China, Taiwan, CCP, PRC, Broken Nest, USA

Episode Transcript:

Stephanie Crider (Host) (Prerecorded Conversations on Strategy intro) Decisive Point introduces Conversations on Strategy, a US Army War College Press production featuring distinguished authors and contributors who explore timely issues in national security affairs. The views and opinions expressed on this podcast are those of the podcast’s guest and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government. The guest in speaking order on this episode is Dr. Roger Cliff.

(Host) Conversations on Strategy welcomes Dr. Roger Cliff. Dr. Cliff is a research professor of Indo-Pacific Affairs in the Strategic Studies Institute at the US Army War College. His research focuses on China’s military strategy and capabilities and their implications for US strategy and policy. He’s previously worked for the Center for Naval Analyses, the Atlantic Council, the Project 2049 Institute, the RAND Corporation, and the Office of the Secretary of Defense.

(Host) The Parameters 2021-22 Winter Issue included an article titled “Broken Nest: Deterring China from Invading Taiwan.” Authors Dr. Jared M. McKinney and Dr. Peter Harris laid out an unconventional approach to the China-Taiwan conundrum. Shortly after the article was published, Parameters heard from Eric Chan, who disagreed with them on many fronts. We’ve invited you here today, Roger, to provide some additional insight on the topic. Let’s jump right in and talk about “Broken Nest: Deterring China from Invading Taiwan.” What is the essence of Jared McKinney and Peter Harris’s article “Broken Nest: Deterring China from Invading Taiwan?”

(Cliff) So this article is an attempt to find an innovative solution to the Taiwan problem that has bedeviled the United States since 1950. In this particular case, the authors’ goal is not to find a long-term, permanent solution to the problem, but simply to find a way to deter China from using force against Taiwan in the near term. Specifically, a way that doesn’t entail risking a military conflict between two nuclear-armed superpowers. Their proposed solution is a strategy of deterrence by punishment, whereby even a successful conquest of Taiwan would result in unacceptable economic, political, and strategic costs for Beijing. The premise of the article is that China’s military is now capable enough that it could conquer Taiwan, even if the United States intervened in Taiwan’s defense. The result, they argue, is that the long-standing US deterrence-by-denial strategy for deterring a Chinese use of force against Taiwan—in other words, threatening Beijing with the risk that a use of force against Taiwan would fail—is no longer credible. Unlike most strategies of deterrence by punishment, the strategy that McKinney and Harris proposed does not primarily rely on military attacks on China. Instead, the punishment comes in the form of imposing other costs on China for a successful use of force against Taiwan. This has several elements. One is the United States selling to Taiwan weapon systems that will be most cost-effective in defending against a Chinese invasion.
This would make a successful invasion of Taiwan more difficult and, therefore, more costly for China. Related to this, they also recommend that Taiwan’s leaders prepare the island to fight a protracted insurgency, even after Taiwan’s conventional military forces have been defeated. The most important element of their strategy, however, consists of the United States and Taiwan laying plans for what they call “a targeted, scorched-earth strategy” that would render Taiwan not just unattractive, if ever seized by force, but positively costly to maintain. According to McKinney and Harris, this could be done most effectively by threatening to destroy facilities belonging to the Taiwan Semiconductor Manufacturing Company, which they say is the most important computer chipmaker in the world. They would also encourage Taiwan to develop the means to target the mainland’s own microchip industry and to prepare to evacuate to the United States highly skilled Taiwanese working in its semiconductor industry. McKinney and Harris say that a punishment strategy should also include economic sanctions on China by the United States and its major allies, such as Japan, and possibly giving a green light to Japan, South Korea, and Australia to develop their own nuclear weapons. At the same time as threatening increased costs to China for using force against Taiwan, the authors also advocate decreasing the cost to Beijing of not using force against Taiwan. Specifically, they recommend that Washington reassure Beijing that the United States will not seek to promote Taiwan’s independence.

(Host) We got some pretty strong pushback from Eric Chan. In fact, he wrote a reply to this article. Can you break that down for our listeners and explain the essence of Chan’s response to the article?

(Cliff) In his response to McKinney and Harris’s article, Eric Chan of the US Air Force makes three main critiques. First, he questions their assertion that attempting to maintain deterrence by denial would result in an arms race between the United States and China, pointing out that China has already been engaged in a rapid buildup of its military capabilities for the past quarter century, even while the United States has been distracted by the war on terror and its counterinsurgency campaigns in Iraq and Afghanistan. Second, Chan finds McKinney and Harris’s recommendations for reducing the cost to Beijing of not using force against Taiwan to be unconvincing. In particular, he disagrees with their claim that Taiwan is moving farther away from mainland China, pointing out that polling in Taiwan has repeatedly found that the vast majority of people there favor a continuation of Taiwan’s current ambiguous status. Therefore, Chan implies, there is essentially no cost to Beijing for not using force against Taiwan, as Taiwan is not moving farther in the direction of independence. Chan also points out that the reassurances that McKinney and Harris recommend that the United States offer to Beijing are in fact things that the US is already doing. Chan’s third critique is that the costs to China of the punishments that McKinney and Harris recommend, compared to the costs that Beijing would already have to bear as a result of fighting a war of conquest over Taiwan, are insufficient to provide any additional deterrent value.
For example, he points out that the economic cost to China of destroying the Taiwanese and Chinese semiconductor industries would be minor compared to the enormous economic damage that any cross-strait war would inevitably cause to China. Similarly, he argues that the prospect of Taiwan fighting a protracted counterinsurgency campaign would be of little deterrent value to a Chinese government that has decades of experience brutally crushing popular resistance. After critiquing this strategy recommended by McKinney and Harris, Chan asserts that the only way of deterring China is to demonstrate an ability to destroy a Chinese invasion force while systematically grinding the rest of China’s military to dust.

(Host) Thanks for laying the groundwork for this conversation. So what I would like to hear from you is, how would you analyze these arguments?

(Cliff) Yeah, so to better understand both the McKinney and Harris article and the Chan critique of it, I think it’s useful to examine the decision-making model that is implicit in McKinney and Harris’s argument. Their analysis treats Beijing as a unitary, rational actor that is faced with a choice between two alternatives. It can either use force against Taiwan or it can continue not to. If it chooses not to use force, then Taiwan will continue in its current, unresolved state. In addition, however, McKinney and Harris argue that, over time, the likelihood of Taiwan voluntarily agreeing to unification with the mainland is diminishing—and, therefore, that the cost to Beijing of not using force against Taiwan is, in fact, gradually increasing over time. On the other hand, if Beijing chooses to use force against Taiwan, then there are two possible outcomes. It could, of course, fail, in which case Beijing would be worse off than before because not only would Taiwan remain independent, but China would also have incurred the human and material costs of fighting and losing a war. If the use of force succeeded, however, then they assume Beijing would be better off because the benefits of conquering Taiwan would outweigh the costs of the war fought to achieve that. They argue that, up until now, Beijing has been deterred from using force against Taiwan because of the likelihood that the United States would intervene on Taiwan’s side and defeat China’s efforts. Thus, from Beijing’s point of view, the expected costs of using force against Taiwan have exceeded the costs of not using force. Since they do not believe it is feasible to restore the military balance in favor of the United States and Taiwan so that a Chinese use of force against Taiwan would likely fail, they now propose a strategy to raise the cost of even a successful use of force against Taiwan, while reducing the cost of not using force against Taiwan, so that Beijing’s rational choice will continue to be to not use force against Taiwan. From the perspective of this model of China’s decision making, Chan’s critique is essentially that McKinney and Harris’s recommendations will not significantly increase the cost to Beijing of a use of force against Taiwan, nor will they reduce the cost to Beijing of not using force against Taiwan. His proposed alternative is to ensure that a use of force against Taiwan will fail and, simultaneously, to increase the cost to China’s ruling party of a use of force against Taiwan by threatening to destroy China’s military.

(Host) Where do you fall on this topic? Do you favor one perspective over the other?
(Cliff) Well, I partially agree with Chan’s critique, but I think he overlooks some important issues, and I think his proposed alternative is problematic. And although I don’t entirely agree with their recommended strategy, I think McKinney and Harris’s recommendations have some value. So let me start with the part of Chan’s critique that I agree with. The value of China’s exports to just two countries, the United States and Japan, is more than $600 billion a year. That’s nearly 5 percent of China’s total economy. If China went to war with the United States, and possibly Japan, over Taiwan, it is highly unlikely that the US and Japan would continue to trade with China. And other countries, such as those in the European Union, might impose trade embargoes on China as well. A regional war would also cause massive disruption to other countries’ trade with China as well as to investment and technology flows into China. Compared to all these costs, the additional cost to Beijing of efforts to specifically destroy the Taiwanese and mainland Chinese semiconductor industries would seem to be relatively minor, and, therefore, I agree with Chan that this is unlikely to affect Beijing’s calculations in a dramatic way. I also agree with him that McKinney and Harris’s recommendations for reducing the cost to China of not using force against Taiwan are already US policies, and, therefore, nothing they propose would actually reduce Beijing’s perceived costs of not using force against Taiwan over what is currently being done. There are, however, two even more fundamental problems with McKinney and Harris’s analysis. The first one is implied by my depiction of it as one based on a unitary, rational actor, and that is the idea of treating a country as a unitary, rational actor. Now this is a valid approach when looking at individual people, but countries and governments are collective actors, and collective actors behave in ways that would not be considered rational for an individual person. This has been proven at the theoretical level by the economist Kenneth Arrow, and even a cursory observation of the behavior of countries in the real world confirms that this is true. National leaders are constantly making decisions that are clearly not in the best overall interests of their nations. In the specific case of China, China’s leaders have repeatedly shown their willingness to do anything to maintain their hold on power, no matter how damaging those actions are for the Chinese nation as a whole. Nowadays, the legitimacy of the Communist Party of China and its top leader, Xi Jinping, rests on two pillars. One is ever-improving standards of living for the Chinese people, and the other is restoring China to what is seen as its rightful place as one of the dominant civilizations of the world. Key to the second pillar is recovering those territories that China lost during its period of weakness during the nineteenth and early twentieth centuries—most especially, Taiwan. If the party or its top leader is seen as failing at either of these two tasks, then they are at risk of being pushed aside and replaced by someone who, it is believed, can achieve them. And Xi and the rest of the Communist Party leadership are keenly aware of this reality. If something were to occur that signified the possibility of the permanent and irreversible loss of Taiwan, therefore, China’s leaders would be willing to pay almost any cost to prevent that from happening.
And this gets to the second fundamental flaw with the unitary, rational actor approach to predicting China’s external behavior, which is that it assumes that the costs and benefits for national leaders are purely material and, therefore, can be objectively calculated by an external observer. But both of those assumptions are incorrect when it comes to China’s policy toward Taiwan. China already enjoys virtually all of the material benefits that unification with Taiwan would confer. People travel freely between Taiwan and mainland China, and trade and investment across the Taiwan Strait are virtually unrestricted. China is not currently able to base military forces on Taiwan, which creates something of a strategic disadvantage for it. But, in fact, in its promises regarding unification to Taiwan, Beijing has said that it would not station military forces in Taiwan so long as Taiwan voluntarily accepts unification. The value to Beijing of formal political unification with Taiwan, therefore, would be almost entirely symbolic. And whichever leader brought that about could be confident of going down in history as a hero of the Chinese nation. Under these circumstances, it is simply not possible to objectively calculate what material price Beijing would or wouldn’t be willing to pay in order to achieve the goal of unification.

(Host) So what would you recommend?

(Cliff) McKinney and Harris’s proposal, as I said, is not without merit. It should be taken seriously. Although Chan makes a number of arguments as to why it might not be practical, anything that raises the cost to Beijing of using force against Taiwan can only contribute to deterring it from doing so. It would be foolish, however, to rely solely on a strategy of punishment for deterring a Chinese use of force against Taiwan. And that’s where I part company with them. I also disagree with their assessment, moreover, that China already possesses the capability to invade and conquer Taiwan. In an analysis I did for a book on the Chinese military published by Cambridge University Press in 2015, I concluded that it would not be possible, in fact, in the near term, for China to do that. And I disagree that maintaining the US capability to prevent a successful invasion of Taiwan would require an all-out arms race with China. It would, however, require focused and determined efforts that concentrate on key capabilities and their enablers, not simply on fielding large numbers of ever more advanced ships, aircraft, and other military technologies. I should also say, though, that I disagree with Chan’s prescription for deterring China, which is to threaten to grind China’s military to dust. US military planning should be focused purely on deterrence by denial, being able to thwart any Chinese effort to use military force to compel Taiwan to unify with the mainland. To threaten the survival of the Chinese regime in response to an attack on Taiwan would be hugely escalatory and could bring about just the type of all-out war that McKinney and Harris’s strategy attempts to avoid. Moreover, I don’t think such a threat is necessary to deter Beijing, so long as we maintain the capability to prevent it from forcibly unifying with Taiwan.

(Host) Roger, you’ve really added an extra layer of insight into this topic.

(Cliff) My pleasure, it’s a very interesting and provocative article, and it’s an important topic that deserves debate, discussion, and analysis.
(Host) If you enjoyed this episode of Conversations on Strategy and would like to hear more, you can find us on any major podcast platform.
Conversations on Strategy Podcast – Ep 27 – COL Eric Hartunian On The Annual Estimate of the Strategic Security Environment
Nov 22 2023
The Annual Estimate of the Strategic Security Environment serves as a guide for academics and practitioners in the defense community on the current challenges and opportunities in the strategic environment. This year’s publication outlines key strategic issues across the four broad themes of Regional Challenges and Opportunities, Domestic Challenges, Institutional Challenges, and Domains Impacting US Strategic Advantage. These themes represent a wide range of topics affecting national security and provide a global assessment of the strategic environment to help focus the defense community on research and publication. Strategic competition with the People’s Republic of China and the implications of Russia’s invasion of Ukraine remain dominant challenges to US national security interests across the globe. However, the evolving security environment also presents new and unconventional threats, such as cyberattacks, terrorism, transnational crime, and the implications of rapid technological advancements in fields such as artificial intelligence. At the same time, the US faces domestic and institutional challenges in the form of recruiting and retention shortfalls in the all-volunteer force, the prospect of contested logistics in large-scale combat operations, and the health of the US Defense Industrial Base. Furthermore, rapidly evolving security landscapes in the Arctic region and the space domain pose unique potential challenges to the Army’s strategic advantage. Read the 2023 Annual Estimate of the Strategic Security Environment: https://press.armywarcollege.edu/monographs/962/ Keywords: Asia, Indo-Pacific, Europe, Middle East, North Africa
Conversations on Strategy Podcast – Ep 21 – C. Anthony Pfaff and Christopher J. Lowrance – Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge
Jul 12 2023
Integrating artificially intelligent technologies for military purposes poses a special challenge. In previous arms races, such as the race to atomic bomb technology during World War II, expertise resided within the Department of Defense. But in the artificial intelligence (AI) arms race, expertise dwells mostly within industry and academia. Also, unlike the development of the bomb, effective employment of AI technology cannot be relegated to a few specialists; almost everyone will have to develop some level of AI and data literacy. Complicating matters is the fact that AI-driven systems can be a “black box” in that humans may not be able to explain some output, much less be held accountable for its consequences. This inability to explain, coupled with the cession to a machine of some functions normally performed by humans, risks the relinquishment of some jurisdiction and, consequently, autonomy to those outside the profession. Ceding jurisdiction could impact the American people’s trust in their military and, thus, its professional standing. To avoid these outcomes, creating and maintaining trust requires integrating knowledge of AI and data science into the military’s professional expertise. This knowledge covers both AI technology and how its use impacts command responsibility; talent management; governance; and the military’s relationship with the US government, the private sector, and society. Read the monograph: https://press.armywarcollege.edu/monographs/959/ Keywords: artificial intelligence (AI), data science, lethal targeting, professional expert knowledge, talent management, ethical AI, civil-military relations

Episode transcript: Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge

Stephanie Crider (Host) You’re listening to Conversations on Strategy. The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government. Joining me today are Doctor C. Anthony Pfaff and Colonel Christopher J. Lowrance, coauthors of Trusting AI: Integrating Artificial Intelligence into the Army’s Professional Expert Knowledge with Brie Washburn and Brett Carey. Pfaff, a retired US Army colonel, is the research professor for strategy, the military profession, and ethics at the US Army War College Strategic Studies Institute and a senior nonresident fellow at the Atlantic Council. Colonel Christopher J. Lowrance is the chief autonomous systems engineer at the US Army Artificial Intelligence Integration Center. Your monograph notes that AI literacy is critical to future military readiness. Give us your working definition of AI literacy, please.

Dr. C. Anthony Pfaff AI literacy is more aimed at our human operators (and that means commanders and staffs, as well as, you know, the operators themselves) being able to employ these systems in a way that we can not only optimize the advantage these systems promise but also be accountable for their output. That requires knowing things about how data is properly curated. It will include knowing things about how algorithms work, but, of course, not everyone can become an AI engineer. So, we have to kind of figure out, at whatever level, given whatever tasks you have, what do you need to know for these kinds of operations to be intelligent?

Col. Christopher J. Lowrance I think a big part of it is going to be also educating the workforce.
And that goes all the way from senior leaders down to the users of the systems. And so, a critical part of it is understanding how best AI-enabled systems can fit in, the appropriate roles that they can play, and how best they can team with or augment soldiers as they complete their tasks. And so, with that, that’s going to take senior leader education coupled with different levels of technical expertise within the force, especially when it comes to employing and maintaining these types of systems, as well as down to the user that’s going to have to provide some level of feedback to the system as it’s being employed.

Host Tell me about some of the challenges of integrating AI and data technologies.

Pfaff What we tried to do is sort of look at it from a professional perspective. And from that perspective, so I’ll talk maybe a little bit more later, but, you know, in many ways there are lots of aspects of the challenge that aren’t really that different. We brought on tanks, airplanes, and submarines that all required new knowledge that not only led to changes in how we fight wars and the character of war but also corresponding changes to doctrine and organizational culture, which we’re seeing with AI. We’ve even seen some of the issues that AI brings up before, when we introduced automated technology, which, in reducing the cognitive load on operators, introduces concerns like accountability gaps and automation biases that arise because humans are just trusting the machine or don’t understand how the machine is working or how to do the process manually, and, as a result, they’re not able to assess its output. The paradigm example of that, of course, is the USS Vincennes incident, where you have an automated system that was giving plenty of information that should have caused a human operator not to permit shooting down what ended up being a civilian airliner. So, we’ve dealt with that in the past. AI kind of puts that on steroids. Two of the challenges, though, I think are unique to AI. With data-driven systems, they actually can change in capabilities as you use them. For instance, a system that starts off able to identify, perhaps, a few high-value targets, over time, as it collects more data, gets asked more questions. And as humans see patterns, or as a machine identifies patterns, and humans ask the machine to test it, you’re able to start discerning properties of organizations, both friendly and enemy, you wouldn’t have seen before. And that allows for greater prediction. What that means is that the same system, used in different places with different people with different tasks, is effectively going to be a different system and have different capabilities over time. The other thing that I think is happening is the way it’s changing how we’re able to view the battlefield. Rather than a cycle of intel driving ops, driving intel, and so on, with the right kind of sensors in place, getting us the right kind of data, we’re able to get more of a real-time picture. The intel side can make assessments based on friendly situations, and the friendly side can make targeting decisions and assessments about their own situation based on intel. So, that’s coming together in ways that are also pretty interesting, and I don’t think we’ve fully wrestled with that yet.

Lowrance Yeah, just to echo a couple of things that Dr. Pfaff has alluded to here: you know, overarching, I think the challenge is gaining trust in the system. And trust really has to be earned. And it’s earned through use, for one.
But you’ve got to walk in being informed, and that’s where the data literacy and the AI literacy piece comes in. And as Dr. Pfaff mentioned, these data-driven systems, generally speaking, will perform based on the type of data that they’ve been trained against and the types of scenarios in which that data was collected. And so, one of the big challenge areas is the adaptation over time. But they are teachable, so to speak. So, as you collect and curate new data examples, you can better inform the systems of how they should adapt over time. And that’s going to be really key to gaining trust. And that’s where the users and the commanders of these systems need to understand some of the limitations of the platforms, their strengths, and also how to retrain or reteach the systems over time using new data so that they can more quickly adapt. There are definitely some technical barriers to gaining trust, but they certainly can be overcome with the proper approach.

Host What else should we consider, then, when it comes to developing trustworthy AI?

Pfaff We’ve kind of taken this from the professional perspective, and so we’re starting with an understanding of professions: that a profession entails specialized knowledge that’s in service to some social good, which allows professionals to exercise autonomy over specific jurisdictions. An example, of course, would be doctors and the medical profession. They have specialized knowledge. They are certified in it by other doctors. They’re able to make medical decisions without nonprofessionals being able to override those. So, the military is the same thing, where we have a particular expertise. And then the question is, how does the introduction of AI affect what counts as expert knowledge? Because that is the core functional imperative of the profession—that it is able to provide that service. In that regard, you’re going to look at the system. We need to be able to know, as professionals, if the system is effective, and also that it is predictable and understandable, meaning I am able to replicate results and understand the ones that I get. We also have to trust the professional. That means the professional has to be certified. And the big question is, as Chris alluded to, in what? Not just certified in the knowledge, but also in responsible norms, and accountable. The reason for that is clients rely on professionals because they don’t have this knowledge themselves. Generally speaking, the client’s not in a position to judge whether or not that diagnosis, for example, is good or not. They can go out and find another opinion, but they’re going out to seek another professional. So, clients not only need to trust that the expert knows what they’re doing but also that there’s an ethic that governs them and that they are accountable. Finally, to trust the profession as an institution—that it actually has what’s required to conduct the right kinds of certification, as well as the institutions required to hold professionals accountable. So that’s the big overarching framework in which we’re trying to take up the differences and challenges that AI provides.

Lowrance Like I mentioned earlier, I think it’s also about getting the soldiers and commanders involved early during the development process and gaining that invaluable feedback. So, a kind of incremental rollout of AI-enabled systems, potentially, is one aspect, or one way of looking at it.
And so that way you can start to gauge and get a better appreciation and understanding of the strengths of AI and how best it can team with commanders and soldiers as they employ the systems. And that teaming can be adaptive. And I think it’s really important for commanders and soldiers to feel like they can have some level of control over how best to employ AI-enabled systems and some degree of mechanism for, let’s say, how much they’re willing to trust the AI system at a given moment or instance to perform a particular function based on the conditions. As we know as military leaders, the environment can be very dynamic, and conditions change. If you look at the scale of operations from counterinsurgency to a large-scale combat operation, you know those are different ends of a spectrum of the types of conflicts that might potentially be faced by our commanders and our soldiers on the ground with AI-enabled systems. And so, they need to adapt and have some level of control and different trust of the system based on understanding that system, its limitations, its strengths, and so on.

Host You touched on barriers just a moment ago. Can you expand a little bit more on that piece of it?

Lowrance Oftentimes, when you look at it from the perspective of machine-learning applications, these are algorithms where the system is able to ingest data examples. So basically, historical examples of conditions of past events. And so, just to make this a little bit more tangible, think of an object recognition algorithm that can look at imagery (maybe it’s geospatial imagery from satellites that have taken an aerial photo of the ground plane) and that you could train to look for certain objects like airplanes. Well, over time, the AI learns to look for these based on the features of these examples within past imagery. With that, sometimes if you take that type of example data and the conditions of the environment change, maybe it’s the backdrop or maybe it’s a different airstrip or different type of airplane or something changes, then performance can degrade to some degree. And this goes back to adaptability. How do these algorithms best adapt? This goes back to the teaming aspect of having users working with the AI, recognizing when that performance is starting to degrade, to some degree, kind of through a checks-and-balances type of system. And then you give feedback by curating new examples and having the system adapt. I think of the old analogy of a baseball card with performance statistics for a particular player: you would give the soldiers and commanders, for instance, a baseball card for a particular AI-enabled system, giving them the types of training statistics. For example, what kind of scenario was this system trained for? What kind of data examples? How many data examples? And so on. That would give commanders and operators a better sense of the strengths and limitations of the system, and where and under what conditions it has been tested and evaluated. And, therefore, when it’s employed in a condition that doesn’t necessarily meet those kinds of conditions, then that’s an early cue to be more cautious . . . to take a more aggressive teaming stance with the system and check more rigorously, obviously, what the AI is potentially predicting or recommending to the soldiers and operators. And that’s one example.
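To make the “baseball card” idea above a bit more concrete, here is a minimal sketch, in Python, of what such a card for an AI-enabled system might contain and how it could cue extra caution when deployment conditions fall outside what was tested. The class name, fields, and example values are illustrative assumptions, not artifacts from the monograph.

```python
from dataclasses import dataclass


@dataclass
class ModelCard:
    """Illustrative "baseball card" for an AI-enabled system (hypothetical fields)."""

    system_name: str
    trained_scenarios: set[str]      # e.g., {"paved airstrip", "daytime"}
    num_training_examples: int
    evaluated_conditions: set[str]   # conditions under which it was tested
    detection_accuracy: float        # measured accuracy under those conditions

    def caution_flags(self, deployment_conditions: set[str]) -> set[str]:
        """Return any deployment conditions the system was never evaluated against.

        A nonempty result is the "early cue" to be more cautious and to check
        the AI's recommendations more rigorously.
        """
        return deployment_conditions - self.evaluated_conditions


card = ModelCard(
    system_name="object-recognition-demo",
    trained_scenarios={"paved airstrip", "clear weather", "daytime"},
    num_training_examples=25_000,
    evaluated_conditions={"paved airstrip", "clear weather", "daytime"},
    detection_accuracy=0.91,
)

# Conditions on the ground differ from what the card documents, so flag them.
print(card.caution_flags({"dirt airstrip", "night", "clear weather"}))
# -> {"dirt airstrip", "night"}
```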
I think you’ve got to have the context. In most instances, the type of AI application, if you will, really drives how much control or how much of the task effort you’re going to give to the AI system. In some instances, as we see in the commercial sector today, there’s a high degree of autonomy given to some AI systems that are recommending, for instance, what you may want to purchase or what movie you should shop for and so on, but what’s the risk of employing that type of system, or what if that system makes a mistake? And I think that’s what’s really important: the context here, and then having the right precautions and the right level of teaming in place when you’re going into those more risky types of situations. And I think another final point on the barriers and how to help overcome them is, again, going back to this notion of giving commanders and soldiers some degree of control over the system. A good analogy is a rheostat knob. Based on the conditions on the ground and on their past use of this system, they start to gain an understanding of the strengths and limitations of the system and then, based on the conditions, can really dial up or dial down the degree of autonomy that they’re willing to grant the system. And I think this is another way of overcoming barriers, as opposed to, let’s say, highly restricting the use of AI-enabled systems, especially when they’re recognizing targets or threats as part of the targeting cycle, and that’s one of the lenses that we looked at in this particular study.

Pfaff When we’re looking at expert knowledge, we break it into four components—the technical part, which we’ve covered. But to have that profession, professionals also have to engage in human development, which means recruiting the right kinds of people, training and educating them in the right kinds of ways, and then developing them over a career to be leaders in the field. And we’ve already talked about the importance of having norms that ensure the trust of the client. Then there’s the political, which stresses mostly how the professions maintain legitimacy and compete for jurisdiction with other professions. (These are) all issues that AI brings up. So those introduce a number of other kinds of concerns that you have to be able to take into account for us to be able to do any of the kinds of things that Chris talked about. So, I would say growing the institution along those four avenues that I talked about represents a set of barriers that need to be overcome.

Host Let’s talk about ethics and politics in relation to AI in the military. What do we need to consider here?

Pfaff It’s about the trust of the client, but that needs to be amplified a little bit. What’s the client trusting us to do? Not only to use this knowledge on their behalf, but also to use it in a way that reflects their values. That means systems that conform to the law of armed conflict. Systems that enable humane and humanitarian decision making—even in high-intensity combat. The big concerns there (include) the issue(s) of accountability and automation bias. Accountability arises because there’s only so much you’re going to be able to understand about the system as a whole. And when we’re talking about the system, it’s not just the data and the algorithms; it’s the whole thing, from sensors to operators. So, it will always be a little bit of a black box.
If you don’t understand what’s going on, or if you get rushed (and war does come with a sense of urgency), you’re going to be tempted to go with the results the machine produces. Our recommendation is to create some kind of interface. We use the idea of fuzzy logic that allows the system and humans to interact with it to identify specific targets in multiple sets. The idea was . . . given any particular risk tolerance the commander has, because machines, when they produce these outputs, assign a probability to them . . . so, for example, if it identifies a tank, it will say something to the effect of “80% tank.” So, if I have a high risk tolerance for potential collateral harms, mission risk, or whatever, and I have very high confidence that the target I’m about to shoot is legitimate, I can let the machine do more of the work. And with a fuzzy logic controller, you can use that to determine where in the system humans need to intervene when that risk tolerance changes or that confidence changes. And this addresses accountability because it specifies what commander, staff, and operator are accountable for—getting the risk assessment right, as well as ensuring that the data is properly curated and the algorithms trained. It helps with automation bias because the machine’s telling you what level of confidence it has. So, it’s giving you prompts to recheck it should there be any kinds of doubts. And one of the ways you can enhance that, that we talked about in the monograph, is, in addition to looking for things that you want to shoot, also look for things you don’t want to shoot. That’ll paint a better picture of the environment, (and) overall reduce the kind of risk of using these systems. Now when it comes to politics, you’ve got a couple of issues here. One is at the level of civ-mil relations. And Peter Singer brought this up 10 years ago when talking about drones. His concern was that drone operation would be better done by private-sector contractors. As we rely more on drones, much of what it means to apply military force would largely be taken over by contractors and, thus, expert knowledge would leave the profession and go somewhere else. And that’s going to undermine the credibility and legitimacy of the profession, with political implications. That didn’t exactly happen because military operators always retained the ability to do this; they were the only ones authorized to use these systems with lethal force. There were some contractors augmenting them, but with AI right now, as we sort through what the private-sector and government roles and expertise are going to be, we have a situation where you could end up . . . one strategy of doing this is that the military’s expert knowledge doesn’t change: all the data science and algorithms are going on on the other side of an interface, where the interface just presents information that the military operator needs to know, and he responds to that information without really completely understanding how it got there in the first place. I think that’s a concern because that is when expertise migrates outside the profession. It also puts the operators, commanders, and staffs in a position where (A.) they will not necessarily be able to assess the results well without some level of understanding. They also won’t be able to optimize the system as its capabilities develop over time. We want to be careful about that because, in the end, the big thing in this issue is expectation management. Because these are risk-reducing technologies . . .
because they’re more precise, they lower risk to friendly soldiers, as well as civilians and so on. So, we want to make sure that we are able to set the right kinds of expectations, which will be a thing senior military leaders have to do. And that includes expectations regarding the effectiveness of the technology, so civilian leaders don’t overrely on it and the public doesn’t become frustrated by a lack of results when it doesn’t quite work out. Because a military that can’t deliver results but also imposes risk on soldiers and noncombatants alike is not one that’s probably going to be trusted.

Lowrance Regarding ethics and politics in relation to AI and the military, I think it’s really important, obviously, throughout the development cycle of an AI system, that you’re taking these types of considerations in early and, obviously, often. So, I know one guiding principle that we have here is that if you break down an AI system across a stack, all the way from the hardware to the data to the model and then to deployment in the application, really, ethics wraps all of that. So, it’s really important that the guiding principles already set forth in various documents from the DoD and the Army regarding responsible AI and its employment are followed here, too. Now, in terms of what we looked at in the paper from the political lens, it’s an interesting dynamic when you start looking at the employment of these systems and, really, the sense of urgency, let’s say, of at least leveraging this technology in either a bottom-up or a top-down type of fashion. So, what I mean by that is, from a research and development perspective, you know, there’s an S and T (or science and technology) base that really leads the Army’s (and really the DoD’s, if you look at it from a joint perspective) development of new systems. But yet, as you know, the commercial sector is leveraging AI now, today, and sometimes there’s a sense of urgency. It’s like, hey, it’s mature enough in these types of aspects. Let’s go ahead and start leveraging it. And so, a more deliberate approach would be a traditional rollout through the S and T environment, where it goes through rigorous test and evaluation processes and then eventually becomes a program of record and is then deployed and fielded. Whereas that doesn’t necessarily prohibit a unit right now from saying, “Hey, I can take this commercial off-the-shelf AI system and start leveraging it and go ahead and get some early experience.” So, I think there’s this interesting aspect between the traditional program-of-record acquisition effort versus this kind of bottom-up, unit-level experimentation and how those are blending together. And it also brings up the roles, I think, that soldiers and, let’s say, contractors play in terms of developing and eventually deploying and employing AI-enabled systems. You know, inherently, AI-enabled systems are complex, and so who has the requisite skills to sustain, update, and adapt these systems over time? Is it the contractor, or should it be the soldiers? And where does that take place? We’ve looked at different aspects of this in this study, and there’s probably a combination, a hybrid. But in one part of the study we talked about the workforce development program and how important that is because, in tactical field environments, you’re not necessarily always going to be able to have contractors present at these field sites.
Nor are you always going to have the luxury of high-bandwidth communications out to the tactical edge where these AI-enabled systems are being employed. Because of that, you’re going to have to have that technical knowledge of updating and adapting AI-enabled systems resident with the soldiers. That’s one thing we definitely emphasized as part of the study of these kinds of relationships.

Host Would you like to share any final thoughts before we go?

Lowrance One thing I would just like to reemphasize is that we can overcome some of these technical barriers that we discussed throughout the paper. And we can do so deliberately, obviously, and responsibly. Part of that, we think (and this is one of the big findings from our study), is taking an adaptive teaming approach. We know that AI inherently, and especially in a targeting cycle application, is an augmentation tool. It’s going to be paired with soldiers. It’s not going to be just running autonomously by itself. What does that teaming look like? It goes back to this notion of giving control down to the commander level, and that’s where that trust is going to start to come in, where if the commander on the ground knows that he can change the system behavior, or change that teaming aspect that is taking place, and the level of teaming, that inherently is going to grow the amount of trust that he or she has in the system during its application. We briefly talked a little bit about that, but I just want to echo, or reinforce, that. And it’s this concept of an explainable fuzzy logic controller. The two big inputs to that controller are the risk tolerance of the commander, based on the conditions on the ground (whether it’s counterinsurgency or large-scale combat operations), versus what the AI system is telling them. Generally speaking, in most predictive applications, the AI has some degree of confidence score associated with its prediction or recommendation. So, leverage that. And leverage the combination of those. And that should give you an indication of how much trust, or how much teaming, in other words, should take place for a given function or role between the soldier and the actual AI augmentation tool. This can be broken down, obviously, into stages, just like the targeting cycle is. And our targeting cycle in joint doctrine, for dynamic targeting, is F2T2EA: find, fix, track, target, engage, and assess. And each one of those, obviously some more than others, is where AI can play a constructive role. We can employ it in a role where we’re doing so responsibly and it’s providing an advantage, in some instances augmenting the soldiers in such a way that really exceeds the performance a human alone could achieve. And that deals with speed, for example. Or finding those really hidden types of targets, these kinds of things that would be difficult even for a human to do alone. Taking that adaptive teaming lens is going to be really important moving forward.

Pfaff When it comes to employing AI, particularly for military purposes, there’s a concern that the sense of urgency that comes with combat operations will overwhelm the human ability to control the machine. We will always want to rely on the speed. And like Chris said, you don’t get the best performance out of the machine that way. It really is all about teaming.
And none of the barriers that we talked about, none of the challenges we talked about, are even remotely insurmountable. But these are the kinds of things you have to pay attention to. There is a learning curve, and engaging in strategies that minimize the amount of adaptation members of the military are going to have to perform, I think, will be a mistake in the long term, even if it gets short-term results.

Host Listeners, you can learn more about this. If you want to really dig into the details here, you can download the monograph at press.armywarcollege.edu/monographs/959. Dr. Pfaff, Col. Lowrance, thank you so much for your time today.

Pfaff Thank you, Stephanie. It’s great to be here.

Host If you enjoyed this episode and would like to hear more, you can find us on any major podcast platform.

About the Project Director

Dr. C. Anthony Pfaff (colonel, US Army retired) is the research professor for strategy, the military profession, and ethics at the US Army War College Strategic Studies Institute and a senior nonresident fellow at the Atlantic Council. He is the author of several articles on ethics and disruptive technologies, such as “The Ethics of Acquiring Disruptive Military Technologies,” published in the Texas National Security Review. Pfaff holds a bachelor’s degree in philosophy and economics from Washington and Lee University, a master’s degree in philosophy from Stanford University (with a concentration in philosophy of science), a master’s degree in national resource management from the Dwight D. Eisenhower School for National Security and Resource Strategy, and a doctorate in philosophy from Georgetown University.

About the Researchers

Lieutenant Colonel Christopher J. Lowrance is the chief autonomous systems engineer at the US Army Artificial Intelligence Integration Center. He holds a doctorate in computer science and engineering from the University of Louisville, a master’s degree in electrical engineering from The George Washington University, a master’s degree in strategic studies from the US Army War College, and a bachelor’s degree in electrical engineering from the Virginia Military Institute.

Lieutenant Colonel Bre M. Washburn is a US Army military intelligence officer with over 19 years of service in tactical, operational, and strategic units. Her interests include development and mentorship; diversity, equity, and inclusion; and the digital transformation of Army intelligence forces. Washburn is a 2003 graduate of the United States Military Academy and a Marshall and Harry S. Truman scholar. She holds master’s degrees in international security studies, national security studies, and war studies.

Lieutenant Colonel Brett A. Carey, US Army, is a nuclear and counter weapons of mass destruction (functional area 52) officer with more than 33 years of service, including 15 years as an explosive ordnance disposal technician, both enlisted and officer. He is an action officer at the Office of the Under Secretary of Defense for Policy (homeland defense integration and defense support of civil authorities). He holds a master of science degree in mechanical engineering with a specialization in explosives engineering from the New Mexico Institute of Mining and Technology.
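As a companion to the episode above, here is a minimal sketch, in Python, of the confidence-versus-risk-tolerance gating behind the “explainable fuzzy logic controller” Pfaff and Lowrance describe. The thresholds, the weighting, and the function name are illustrative assumptions; a real fuzzy controller would use membership functions and a rule base rather than the crude blend shown here, and none of this is code from the monograph.

```python
def teaming_level(ai_confidence: float, commander_risk_tolerance: float) -> str:
    """Map AI confidence and commander risk tolerance to a coarse teaming level.

    ai_confidence: the model's reported confidence in its recommendation,
        from 0.0 to 1.0 (e.g., "80% tank" -> 0.8).
    commander_risk_tolerance: how much risk the commander accepts, from 0.0
        to 1.0, dialed up or down like a rheostat as conditions change.
    """
    # Crude stand-in for fuzzy inference: blend the two inputs into one score.
    score = 0.6 * ai_confidence + 0.4 * commander_risk_tolerance

    if score >= 0.8:
        return "machine acts; human reviews afterward"
    if score >= 0.5:
        return "machine recommends; human confirms before acting"
    return "human decides; machine output is advisory only"


# High confidence ("80% tank") but a cautious commander in a counterinsurgency
# setting still routes the decision through a human before anything is engaged.
print(teaming_level(ai_confidence=0.8, commander_risk_tolerance=0.3))
# -> machine recommends; human confirms before acting
```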
Conversations on Strategy Podcast – Ep 22 – Paul Scharre and Robert J. Sparrow – AI: Centaurs Versus Minotaurs—Who Is in Charge?
Jul 12 2023
Who is in charge when it comes to AI? People or machines? In this episode, Paul Scharre, author of the books Army of None: Autonomous Weapons and the Future of War and the award-winning Four Battlegrounds: Power in the Age of Artificial Intelligence, and Robert Sparrow, coauthor with Adam Henschke of “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming” that was featured in the Spring 2023 issue of Parameters, discuss AI and its future military implications. Read the article: https://press.armywarcollege.edu/parameters/vol53/iss1/14/ Keywords: artificial intelligence (AI), data science, lethal targeting, professional expert knowledge, talent management, ethical AI, civil-military relations

Episode transcript: AI: Centaurs Versus Minotaurs: Who Is in Charge?

Stephanie Crider (Host) The views and opinions expressed in this podcast are those of the authors and are not necessarily those of the Department of the Army, the US Army War College, or any other agency of the US government. You’re listening to Conversations on Strategy. I’m talking with Paul Scharre and Professor Rob Sparrow today. Scharre is the author of Army of None: Autonomous Weapons and the Future of War and Four Battlegrounds: Power in the Age of Artificial Intelligence. He’s the vice president and director of studies at the Center for a New American Security. Sparrow is co-author with Adam Henschke of “Minotaurs, Not Centaurs: The Future of Manned-Unmanned Teaming,” which was featured in the Spring 2023 issue of Parameters. Sparrow is a professor in the philosophy program at Monash University, Australia, where he works on ethical issues raised by new technologies. Welcome to Conversations on Strategy. Thanks for being here today.

Paul Scharre Absolutely. Thank you.

Host Paul, you talk about centaur warfighting in your work. Rob and Adam re-envisioned that model in their article. What exactly is centaur warfighting?

Scharre Well, thanks for asking, and I’m very excited to join this conversation with you and with Rob on this topic. The idea really is that as we see increased capabilities in artificial intelligence and autonomous systems, rather than thinking about machines operating on their own, we should be thinking about humans and machines as part of a joint cognitive system working together. And the metaphor here is the idea of a centaur, the mythical creature that is half human and half horse, with the human on top—the head and the torso of a human and then the body of a horse. You know, it’s, like, a helpful metaphor for thinking about combining humans and machines working to solve problems using the best of both human and machine intelligence. That’s the goal.

Host Rob, you see AI being used differently. What’s your perspective on this topic?

Robert Sparrow So, I think it’s absolutely right to be talking about human-machine or manned-unmanned teaming. I do think that we will see teams of artificial intelligences, robots, and human beings working and fighting together in the future. I’m less confident that the human being will always be in charge. And I think the image of the centaur is kind of reassuring to people working in the military because it says, “Look, you’ll get to do the things that you love and think are most important. You’ll get to be in charge, and you’ll get the robots to do the grunt work.” And, actually, when we look at how human beings and machines collaborate in civilian life, we often find it’s the other way around.
(It) turns out that machines are quite good at planning and calculating and cognitive skills. They’re very weak at interactions with the physical world. Nowadays, if you, say, ask ChatGPT to write you a set of orders to deploy troops, it can probably do a passable job at that just by cannibalizing existing texts online. But if you want a machine to go over there and empty that wastepaper basket, the robot simply can’t do it. So, I think the future of manned-unmanned teaming might actually be computers, with AI systems issuing orders, or maybe advice that has the moral force of orders, to teams of human beings. Adam and I have proffered the image of the Minotaur, which was the mythical creature with the head of a bull and the body of a man, as an alternative to the centaur when we’re thinking about the future of manned-unmanned teaming.

Host Paul, do you care to respond?

Scharre I think it’s a great paper, and I would encourage people to check it out, “Minotaurs, Not Centaurs.” And it’s a really compelling image. Maybe the humans aren’t on top. Maybe the humans are on the bottom, and we have this other creature that’s making the decisions, and we’re just the body taking the actions. (It’s) kind of creepy, the idea that maybe we’re headed towards this role of minotaurs instead, and we’re just doing the bidding of the machines. You know, a few years ago, I think a lot of people envisioned that the types of tasks AI would be offloading would be low-skill tasks, particularly physical labor. So, a lot of the concern was, like, autonomy was gonna put truck drivers out of work. It turns out, maneuvering things in the physical world is really hard for machines. And, in fact, we’ve seen with progress in large language models in just the last few years, ChatGPT or the newest version (GPT-4), that they’re quite good at lower-level skills of cognitive labor, so they can do a lot of the tasks that maybe an intern might do in a white-collar job environment, and they’re passable. And as he’s pointing out, ask a robot to throw out a trash basket for you or to make a pot of coffee . . . it’s not any good at doing that. But if you said, “Hey, write a short essay about human-machine teaming in the military environment,” it’s not that bad. And that’s pretty wild. And I think sometimes these models have been criticized . . . people say, “Well, they’re just sort of like shuffling words around.” It’s not. It’s doing more than that. Some of the outputs are just garbage, but (with) some of them, it’s clear that the model does understand, to some extent. It’s always dicey using anthropomorphic terms, but (it can) understand the prompts that you’re giving it, what you’re asking it to do, and can generate output that’s useful. And sometimes it’s vague, but so are people sometimes. And I think that this vision of, hey, are we headed towards this world of a minotaur kind of teaming environment, is a good concern to raise because presumably that’s not what we want. So then how do we ensure that humans are in charge of the kinds of decisions that we want humans to be responsible for? How do we be intentional about using AI and autonomy, particularly in the military environment?

Sparrow I would resist the implication that it’s only really ChatGPT that we should be looking at. I mean, in some ways it’s the history of chess or gaming that we should be looking to, the fact that machines outperform all, or at least most, human beings.
And if you could develop a warfighting machine for command functions, it wouldn’t necessarily have to be able to write nice sentences. The question is, when it comes to some of the functions of battlefield command, whether or not machines can outperform human beings in that role. There are some applications, like threat assessment in aerial warfare, for instance, where the tempo of battle is sufficiently high and there’s lots of things whizzing around in the sky, and we’re already at a point where human beings are relying on machines to at least prioritize tasks for them. And I think, increasingly, it will be a brave human being that overrides the machine and says, “The machine has got this wrong.” We don’t need to be looking at explicit hierarchies or acknowledged hierarchies either. We need to look at how these systems operate in practice. And because of what’s called automation bias, which is the tendency of human beings to defer to machines once their performance reaches a certain point, yeah, I think we’re looking at a future where machines may be effectively carrying out key cognitive tasks. I’m inclined to agree with Paul that there are some things that it is hard to imagine machines doing well. I’m a little bit less confident in my ability to imagine what machines can do well in the future. If you’d asked me two years ago, five years ago, “Will AIs be able to write good philosophy essays?” I would have said, “That’s 30 years off.” Now I can type all my essay questions into ChatGPT and this thing performs better than many of my students. You know, I’m a little bit less confident that we know what the future looks like here, but I take it that the fundamental technology behind generative AI and adversarial neural networks is actually going to be pretty effective when it comes to at least wargaming. And, actually, the issue for command in the future is how well we can feed machines the data that they need to train themselves up in simulation and apply it to the real world. I worry about how we’ll know these things are reliable enough to move forward, but there are some pretty powerful dynamics in this area where people may effectively be forced to adopt AI command in response to either what the enemy is doing or what they think the enemy is doing. So, it’s not just the latest technology—there’s a whole set of technologies here, and a whole set of dynamics, that I think should undercut our confidence that human beings will always be in charge. Host Can you envision a scenario in which centaur and minotaur warfighting might both have a role, or even work in tandem? Sparrow I don’t think it’s all going to be centaurs, but I don’t think it will all be minotaurs. And in some ways, this is a matter of the scale of analysis. If you think about something like Uber, you know, people have this vision of the future of robot taxis. I would get into the robot taxi. And as the human being, I would be in charge of what the machine does. In fact, what we have now is human beings being told by an algorithm where to drive. Even if I were getting into a robot taxi and telling it where to go, for the moment, there’d be a human being in charge of the robot taxi company. And I think at some level, human beings will remain in charge of war as much as human beings are ever in charge of world historical events. But I think for lots of people who are fighting in the future, it will feel as though they’re being ordered around by machines. People will be receiving feeds of various sorts. 
It will be a very alienating experience, and I think in some contexts they genuinely will be effectively being ordered around by an AI. An interesting thing to think about here is how even an autonomous weapons system, which is something that Paul and I have both been concerned about, actually relies on a whole lot of human beings. And so at one level, you hope that a human being is setting the parameters of operations of the autonomous weapons system, but at another level, everyone is just following this thing around and serving its needs. You know, it returns to base and human beings refuel it, maintain it, and rearm it. Everyone has to respond to what it does in combat. Even with something like a purportedly autonomous weapons system, zoom out a bit, and what you see is a machine making a core set of warfighting decisions and a whole lot of human beings scurrying around serving the machine. Zoom out more, and you hope that there’s a human being in charge. Now, it depends a little bit on how good real-world wargaming by machines gets, and that’s not something I have a vast amount of access to—how effective AI is in wargaming. Paul may well know more about that. But at that level, if you really had a general officer that was a machine, or even a staff taking advice from machine wargamers, then I think most of the military would end up being a minotaur rather than a centaur. Scharre It’s not just ChatGPT and GPT-4, not just large language models. We have seen, as you pointed out, really amazing progress across a whole set of games—chess, poker, computer games like StarCraft 2 and Dota 2—with human-level and sometimes superhuman performance at these games. What they’re really doing is functions that militaries might think of as situational awareness and command and control. Oftentimes when we think about the use of AI or autonomy in a military context, people tend to think about robotics, which has value because you can take a person out of a platform and then maybe make the platform more maneuverable or faster or more stealthy or smaller or more attritable or something else. In these games, the AI agents have access to the same units as the humans do. The AI playing chess has access to the same chess pieces as the humans do. What’s different is the information processing and decision making. So it’s the command and control that’s different. And it’s not just that these AI systems are better. They actually play differently than humans in a whole variety of ways. And so it points to some of these advantages in a wartime context. Obviously, the real world is a lot more complicated than a chess or Go board game, and there are just a lot more possibilities and a lot more clever, nefarious things that an adversary can do in the real world. I think we’re going to continue to see progress. I totally agree with Rob that we really can’t say where this is going. I mean, I’ve been working on these issues for a long time. I continue to be surprised. I have been particularly surprised in the last 18 to 24 months with some of the progress. GPT-4 has human-level performance on a whole range of cognitive tasks—the SAT, the GRE, the bar exam. It doesn’t do everything that humans can do, but it’s pretty impressive. You know, I think it’s hard to say where things are headed going forward, but I do think a core question that we’re going to grapple with in society, in the military, and in other contexts is what tasks should be done by a human and which ones by a machine? 
And in some cases, the answer to that will be based simply on which one performs better, and there are some things where you really just care about accuracy and reliability. And if the machine does a better job, if it’s a safer driver, then we could save lives and maybe we should hand over those tasks to machines once machines get there. But there are lots of other things, particularly in the military context, that touch on more fundamental ethical issues, and Rob touches on many of these in the paper, where we also want to ask the question, are there certain tasks that only humans should do, not because the machines cannot do them but because they should not do them for some reason? Are there some things that require uniquely human judgment? And why is that? And I think that these are going to be difficult things to grapple with going forward. These metaphors can be helpful. Thinking about: is it a centaur? Is the human really up top making decisions? Is it more like a minotaur? This algorithm is making decisions and humans are running around and doing stuff . . . and we don’t even know why. Garry Kasparov talked about this in a recent wonderful book on chess called Game Changer, about AlphaZero, the AI chess-playing agent. He talks about how, after he lost to IBM’s Deep Blue in the 90s, Kasparov created this field of human-machine teaming in chess—freestyle chess, or what has sometimes been called centaur chess—which is where this idea of centaur warfighting really comes from. And there was a period of time where the best chess players were human-machine teams. And it was better than having humans playing alone or even chess engines playing by themselves. That is no longer the case. The AI systems are now so good at chess that the human does not add any value. The human just gets in the way. And so, Kasparov describes in this book how chess is shifting to what he calls a shepherd model, where the human is no longer pairing with the chess agent, but the human is choosing the right tool for the job and shepherding these different AI systems and saying, “Oh, we’re playing chess. I’m going to use this chess engine,” or “I’m going to write poetry. I’m going to use this AI model to do that.” And it’s a different kind of model, but I think it’s helpful to think about these different paradigms and then ask which ones we want to use. You know, we do have choices about how we use the technology. How should that drive our decision making in terms of how we want to employ this technology for various ends? Host What trends do you see in the coming years, and how concerned or confident should we be? Sparrow I think we should be very concerned about maintaining human control over these new technologies—not necessarily the kind of “superintelligent AI is going to eat us all” questions that some of my colleagues are concerned about, but, in practice, how much are we exercising what we think of as our core human capacities in our daily roles, both in civilian life and in military life? And how much are we just becoming servants of machines? How can we try to shape the powerful dynamics driving in that direction? The sort of game-theoretic nature of conflict—the fact that, at some level, you really want to win a battle or a war—makes it especially hard to carve out space for the kind of moral concerns that both Paul and I think should be central to this debate. 
Because if your strategic adversary just says, “Look, we’re all in for AI command,” and it turns out that that is actually very effective on the battlefield, then it’s gonna be hard to say, “Hang on a moment, that’s really dehumanizing, we don’t like just following the orders of machines.” It’s really important to be having this conversation. It needs to happen at a global level—at multiple levels. One thing that hasn’t come up in our conversation is how I think the performance of machines will actually differ in different domains—the performance of robots, in particular. So, something like war in outer space, it’s all going to be robots. Even undersea warfare, it strikes me—at least the command functions—is likely to be all onboard computer systems. It’s not just about platforms on the sea; the things that are lurking in the water are probably going to be controlled by computers. What would it be like to be the mechanic on an undersea platform? You know, there’s someone whose job it is to grease the engines and reload the torpedoes, but, actually, all the combat decisions on the submarine are being made by an onboard computer. That would be a really miserable role, to be the one or two people in this tin can under the ocean where the onboard computer is choosing what to engage and when. Aerial combat, again, I think probably manned fighters have a limited future. My guess is that the sort of manned aircraft . . . there are probably not too many more generations left of those. But infantry combat . . . I find that really hard to imagine being handed over to robots for a long time because of how difficult the physical environment is. That’s just to say, this story looks a bit different depending upon where you’re thinking about combat taking place. I do think the metaphors matter. I mean, if you’re going to sell AI to highly trained professionals, what you don’t do is say, “Look, here’s a machine that is better than you at your job. It’s going to do all the things you love and put you out of work.” No one turns up and says that. Everybody turns up to the conference and says, “Look, I’ve got this great machine, and it’s going to do all the routine work. And you can concentrate on things that you love.” That’s a sales pitch. And I don’t think that we should be taken in by that. You want people to start talking about AI and take it seriously, and if you go to them saying, “Look, this thing’s just going to wipe out your profession,” that’s a pretty short conversation. But if you take seriously the idea that human beings are always going to be in charge, that also forecloses certain conversations that we need to be having. And the other thing here is how these systems reconfigure social and political relations by stealth. I’m sure there are people in the military now who are using ChatGPT or GPT-4 for routine correspondence, which includes things that are actually quite important. So, even if the bureaucracy said, “Look, no AI,” if people start to rely on it in their daily practice, it’ll seep into the bureaucracy. I mean, in some ways, these systems, they’re technocratic, through and through. And so, they appeal to a certain sort of bureaucracy. And a certain sort of society loves the idea that all we need is good engineers and then all hard choices will be made by machines, and we can absolve ourselves of responsibility. There are multiple cultural and political dynamics here that we should be paying attention to. 
And some of them, I suspect, are likely to fly beneath the radar, which is why I hope this conversation and others like it will draw people’s attention to this challenge. Scharre One of the really interesting questions in my mind, and I’d be interested in your thoughts on this, Rob, is how we balance this tension between efficacy of decision making and where we want humans to sit in terms of their proper role. And I think it’s particularly acute in a military context. When I hear the term “minotaur warfighting,” I think, like, oh, that does not sound like a good thing. You talk in your paper about some of the ethical implications, and I come away a little bit like, OK, so is this something that we should be pursuing because we think it’s going to be more effective, or something we should be running away from, and this is like a warning: hey, if we’re not careful, we’re all gonna turn into these minotaurs and be running around listening to these AI systems. We’re gonna lose control over the things that we should be in charge of. But, of course, there’s this tension of if you’re not effective on the battlefield, you could lose everything. In the wartime context, it’s even more compelling than in business—if some business doesn’t use the technology in the right way, or it’s not effective, or it doesn’t improve their processes, OK, they go out of business. If a country does not invest in its national defense, it could cease to exist as a nation. And so how do we balance some of these needs? Are there some things that we should be keeping in mind as the technology is progressing and we’re sort of looking at these choices of whether we use the system in this way or that way, to kind of help guide these decisions? Sparrow Ten years ago, everyone was gung ho on autonomy. It was all going to be autonomous. And I started asking people, “Would you be willing to build your next set of submarines with no space for human beings on board? Let’s go for an unmanned submersible fleet.” And a whole lot of people who, on paper, were talking about AI and autonomous weapons systems outperforming human beings would really balk at that point. How confident would you have to be to say, “We are going to put all our eggs in the unmanned basket for something like the next-generation strike fighter or submarines”? And it turns out I couldn’t get many takers for that, which was really interesting. I mean, I was talking to a community of people who, again, all said, “Look, AI is going to outperform human beings.” I said, “OK, so let’s just build these systems. There’s no space for a human being on board.” People started to get really cagey. And de-skilling’s a real issue here because if we start to rely on these things then human beings quickly lose the skills. So you might say, “Let’s move forward with minotaur warfighting. But let’s keep, you know, in the back of our minds that we might have to switch back to the human generals if our adversary’s machines are beating our machines.” Well, I’m not sure human generals will actually maintain the skill set if they don’t get to fight real wars. At another level, I think there are some questions here about the relationship between what we’re fighting for and how we’re fighting. So, say we end up with minotaur warfighting and we get more and more command decisions, as it were, made by machines. What happens if that starts to move back into our government processes? It could either be explicit—hand over the Supreme Court to the robots. 
Or it could be, in practice, that everything you see in the media is the result of some algorithm. At one level, I do think we need to take seriously these sorts of concerns about what human beings are doing and what decisions human beings are making, because the point of victory will be for human beings to be able to lead their lives. Now, all of that said, in any given battle it’s gonna be hard to avoid the thought that the machines are going to be better than us, and so we should hand over to them in order to win that battle. Scharre Yeah, I think this question of adoption is such an interesting one because, like, we’ve been talking about human agency in these tasks. You know, flying a plane or being in the infantry or, you know, a general making decisions. But there also is human agency in this question of whether you use a technology in this way. And we can see it in lots of examples of AI technology today—facial recognition, for example. There are many different paradigms for how we’re seeing facial recognition used. For example, it’s used very differently in China today than in the United States. Different regulatory environment. Different societal adoption. That’s a choice that society or the government—whoever the powers that be are—gets to make. There’s a question of performance, and that’s always, I think, a challenge that militaries have with any new technology: when is it good enough that you go all in on adoption, right? When are airplanes good enough that you then reorient your naval forces around carrier aviation? And that’s a difficult call to make. And if you go too early, you can make mistakes. If you go too late, you can make mistakes. And I think that’s one challenge. It’ll be interesting, I think, to see how militaries approach these things. My observation so far has been (that) militaries have moved really slowly. Certainly much, much slower than what we’ve seen out in the civilian sector. If you look at the rhetoric coming out of the Defense Department, they talk about AI a lot. And if you look at what they’re actually doing, it’s not very much. It’s pretty thin, in fact. Former Secretary of Defense Mark Esper, when he was the secretary, testified that AI was his number one priority. But it’s not. When you look at what the Defense Department is spending money on, it’s not even close. It’s about 1 percent of the DoD budget. So, it’s a pretty tiny fraction. And it’s not even in the top 10 for priorities. So, that, I think, is interesting because it drives choices. And, historically, you can see that things that are relevant to identity become a big factor in how militaries adopt a technology, whether it’s cavalry officers looking at the tank or the Navy transitioning from sail to steam. That transition was pushed back against because sailors climbed the mast and worked the rigging. They weren’t down in the engine room, turning wrenches. That wasn’t what sailors did. And one of the interesting things to me is how these identities, in some cases, can be so powerful to a military service that they even outlast the task itself. We still call the people on ships sailors. They’re not actually climbing the mast or working the riggings; they’re not actually sailors, but we call them that. And so how militaries adopt these technologies, I think, is very much an open question with a lot of significance both from the military effectiveness standpoint and from an ethical standpoint. 
One of the things that’s super interesting to me is that we are talking about some of these games—AI performance in chess and Go and computer games. And what’s interesting is that I think some of the attributes that are valued in games might be different from what the military values. So, in gaming environments, like computer games such as StarCraft and Dota 2, one of the things computers are very, very good at is operating with greater speed and precision than humans. So they’re very good at what’s termed the microplay—basically, the tactics of maneuvering these little artificial units around on this simulated battlefield. They’re effectively invincible in small-unit tactics. So, if you let the AI systems play unconstrained, the AI units can dodge enemy fire. They are basically invincible. You have to dumb the AI systems down, then, to play against humans, because when these companies, like OpenAI or DeepMind, are training these agents, they’re not training them to do that. That’s actually easy. They’re trying to train them to do the longer-term planning that humans are doing—processing information and making higher-level strategic decisions. And so they dumb down the speed at which the AI systems are operating. And you do get some really interesting higher-level strategic decision making from these AI systems. So, for example, in chess and Go, the AI systems have come up with new opening moves, in some cases ones that humans don’t really fully understand—like, why is this a good tactic? Sometimes they’ll make moves whose value humans don’t fully understand until further into the game, when they can see, oh, that move made a really important change in the position on the board that turned out to be really valuable. And so, you can imagine militaries viewing these advantages quite differently. Something that is fast—that’s the kind of thing that militaries could see value in. OK, it’s got quick reaction times. Something that has higher precision they could see value in. Something that’s gonna do something spooky and weird, where I don’t really understand why it’s doing it but in the long run it’ll be valuable—I could see militaries not being excited about that at all . . . and really hesitant. These are really interesting questions that militaries are going to have to grapple with and that have all of these important strategic and ethical implications going forward. Host Do you have any final thoughts you’d like to share before we go? Sparrow I kind of think that people will be really quick to adopt technologies that save their lives, for instance. Situational awareness/threat assessment. I think that is going to be adopted quite quickly. Targeting systems, I think, will be adopted. If we can take out an enemy weapon or platform more quickly because we’ve handed over targeting to an AI, I think that stuff will be adopted quite quickly. I think it’s gonna depend where in the institution one is. I’m a big fan of looking at people’s incentive structures. You know, take seriously what people say, but you should always keep in the back of your mind: what would someone like that say? This is a very hard space to be confident in, but I just encourage people not just to talk to people like them but to take seriously what people lower down the hierarchy think, how they’re experiencing things. That question that Paul raised—do you go early in the hope of getting a decisive advantage, or do you go late because you want to be conservative—those are sensible thoughts. 
As Paul said, it’s still quite early days for military AI. People should be, as they are, paying close attention to what’s happening in Ukraine at the moment, where, as I understand it, there is some targeting now being done by algorithms, and keep talking about it. Host Paul, last word to you, sir. Scharre Thank you, Stephanie and Rob, for a great conversation, and, Rob, for just a really interesting and thoughtful paper . . . and really provocative. I think the issues that we’re talking about are just really going to be difficult ones for the defense community to struggle with going forward in terms of what are the tasks that should be done by humans versus machines. I do think there are a lot of really challenging ethical issues. Oftentimes, ethical issues end up getting kind of short shrift because it’s like, well, who cares if we’re going to be minotaurs as long as it works? I think it’s worth pointing out that some of these issues get to the core of professional ethics. The context for war is a particular one, and we have rules for conduct in war (the law of war) that kind of write down what we think appropriate behavior is. But there are also interesting questions about military professional ethics—like, you know, decisions about the use of force, for example, are the essence of the military profession. What are those things that we want military professionals to be in charge of . . . that we want them to be responsible for? You know, some of the most conservative people I’ve ever spoken to on these issues of autonomy are the military professionals themselves, who don’t want to give up the tasks that they’re doing. And sometimes I think for reasons that are good and make sense, and sometimes, for reasons that I think are a little bit stubborn and pigheaded. Sparrow Paul and Stephanie, I know you said last word to Paul, so I wanted to interrupt now rather than at the end. I think it’s worth asking, why would someone join the military in the future? Part of the problem here is a recruitment problem. If you say, “You’re going to be fodder for the machines,” why would people line up for that? You know, that question about military culture is absolutely spot on, but it matters to the effectiveness of the force, as well, because you can’t get people to take on the role. And the other thing is the decision to start a war, I mean, or even to start a conflict, for instance. That’s something that we shouldn’t hand over to the machines, but the same logic that is driving towards battlefield command is driving towards making decisions about first strikes, for instance. And one thing we should resist is the situation where some AI system says now’s the time to strike. For me, that’s a hard line. You don’t start a war on the basis of the choice of the machine. So just some examples, I think, to illustrate the points that Paul was making. Sorry, Paul. Scharre Not at all. All good points. I think these are gonna be the challenging questions going forward, and I think there are going to be difficult issues ahead to grapple with when we think about how to employ these technologies in a way that’s effective and that keeps humans in charge and responsible for these kinds of decisions in war. Host Thank you both so much. Sparrow Thanks, Stephanie. And thank you, Paul. Scharre Thank you both. Really enjoyed the discussion. Host Listeners, you can find the genesis article at press.armywarcollege.edu/parameters. Look for volume 53, issue 1. 
If you enjoyed this episode of Decisive Point and would like to hear more, you can find us on any major podcast platform. About the authors Paul Scharre is the executive vice president and director of studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence. His first book, Army of None: Autonomous Weapons and the Future of War, won the 2019 Colby Award, was named one of Bill Gates’ top five books of 2018, and was named by The Economist as one of the top five books for understanding modern warfare. Scharre previously worked in the Office of the Secretary of Defense (OSD), where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. He led the Department of Defense (DoD) working group that drafted DoD Directive 3000.09, establishing the department’s policies on autonomy in weapon systems. He also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance programs and directed energy technologies. Scharre was involved in the drafting of policy guidance in the 2012 Defense Strategic Guidance, 2010 Quadrennial Defense Review, and secretary-level planning guidance. Robert J. Sparrow is a professor in the philosophy program and an associate investigator in the Australian Research Council Centre of Excellence for Automated Decision-making and Society (CE200100005) at Monash University, Australia, where he works on ethical issues raised by new technologies. He has served as a cochair of the Institute of Electrical and Electronics Engineers Technical Committee on Robot Ethics and was one of the founding members of the International Committee for Robot Arms Control.