The risks of artificial intelligence in weapons design
by Catherine Caruso for Harvard News
Boston MA (SPX) Aug 13, 2024

For decades, the military has used autonomous weapons such as mines, torpedoes, and heat-seeking missiles that operate based on simple reactive feedback without human control. However, artificial intelligence (AI) has now entered the arena of weapons design.

According to Kanaka Rajan, associate professor of neurobiology in the Blavatnik Institute at Harvard Medical School, and her team, AI-powered autonomous weapons represent a new era in warfare and pose a concrete threat to scientific progress and basic research.

AI-powered weapons, which often involve drones or robots, are actively being developed and deployed, Rajan said. She expects that they will only become more capable, sophisticated, and widely used over time due to how easily such technology proliferates.

As that happens, she worries about how AI-powered weapons may lead to geopolitical instability and how their development could affect nonmilitary AI research in academia and industry.

Rajan, along with HMS research fellows in neurobiology Riley Simmons-Edler and Ryan Badman and MIT PhD student Shayne Longpre, outline their central concerns - and a path forward - in a position paper published and presented at the 2024 International Conference on Machine Learning.

In a conversation with Harvard Medicine News, Rajan, who is also a founding faculty member of the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, explained why she and her team decided to delve into the topic of AI-powered military technology, what they see as the biggest risks, and what they think should happen next.

Harvard Medicine News: You are a computational neuroscientist who studies AI in the context of human and animal brains. How did you end up thinking about AI-powered autonomous weapons?

Kanaka Rajan: We started considering this topic in reaction to a number of apocalyptic predictions about artificial general intelligence circulating in spring 2023. We asked ourselves, if those predictions are indeed blown out of proportion, then what are the real risks to human society? We looked into how the military is using AI and saw that military research and development is pushing heavily toward building systems of AI-powered autonomous weapons with global implications.

We realized that the academic AI research community would not be insulated from the consequences of widespread development of these weapons. Militaries often lack sufficient expertise to develop and deploy AI tech without outside advice, so they must draw on the knowledge of academic and industry AI experts. This raises important ethical and practical questions for researchers and administrators at academic institutions, similar to those around any large corporation funding academic research.

HM News: What do you see as the biggest risks as AI and machine learning are incorporated into weapons?

Rajan: There are a number of risks involved in the development of AI-powered weapons, but the three biggest we see are: first, how these weapons may make it easier for countries to get involved in conflicts; second, how nonmilitary scientific AI research may be censored or co-opted to support the development of these weapons; and third, how militaries may use AI-powered autonomous technology to reduce or deflect human responsibility in decision-making.

On point one, a big deterrent that keeps nations from starting wars is soldiers dying - a human cost to their citizens that can create domestic consequences for leaders. A lot of current development of AI-powered weapons aims to remove human soldiers from harm's way, which by itself is a humane thing to do. However, if few soldiers die in offensive warfare, it weakens the association between acts of war and human cost, and it becomes politically easier to start wars, which, in turn, may lead to more death and destruction overall. Thus, major geopolitical problems could quickly emerge as AI-powered arms races amp up and such technology proliferates further.

On the second point, we can look to the history of academic fields like nuclear physics and rocketry. As these fields gained critical defense importance during the Cold War, researchers experienced travel restrictions, publication censorship, and the need for security clearance to do basic work. As AI-powered autonomous technology becomes central to national defense planning worldwide, we could see similar restrictions placed on nonmilitary AI research, which would greatly impede basic AI research, worthwhile civilian applications in health care and scientific research, and international collaboration. We consider this an urgent concern given the speed at which AI research is growing and research and development on AI-powered weapons is gaining traction.

Finally, if AI-powered weapons become core to national defense, we may see major attempts to co-opt AI researchers' efforts in academia and industry to work on these weapons or to develop more "dual-use" projects. If more and more AI knowledge starts to be locked behind security clearances, it will intellectually stunt our field. Some computer scientists are already calling for such drastic restrictions, but their argument dismisses the fact that new weapons technologies always tend to proliferate once pioneered.

HM News: Why do you think weapons design has been relatively overlooked by those thinking about threats posed by AI?

Rajan: One reason is that it's a new and quickly changing landscape: Since 2023, a number of major powers have begun to rapidly and publicly embrace AI-powered weapons. Also, individual AI-powered weapons systems can seem less threatening in isolation than when considered as a broader collection of systems and capabilities, which makes it easy to overlook the issues.

Another challenge is that tech companies are opaque about the degree of autonomy and human oversight in their weapons systems. For some, human oversight could mean pressing a "go kill" button after an AI weapons unit makes a long chain of black box decisions, without the human understanding or being able to spot errors in the system's logic. For others, it could mean a human has more hands-on control and is checking the machine's decision-making process.

Unfortunately, as these systems become more complex and powerful, and as reaction times in war shrink, the black box outcome is more likely to become the norm. Furthermore, seeing "human-in-the-loop" on AI-powered autonomous weapons may lull researchers into thinking the system is ethical by military standards, when in fact it does not meaningfully involve humans in making decisions.

HM News: What are the most urgent research questions that need to be answered?

Rajan: While a lot of work is still needed to build AI-powered weapons, most of the core algorithms have already been proposed or are a focus of major academic and industry research motivated by nonmilitary applications - for example, self-driving vehicles. With that in mind, we must consider our responsibility as scientists and researchers to guide the application of these technologies ethically, and we must think about how to navigate the effects of military interest on our research.

If militaries around the world aim to replace a substantial portion of battlefield and support roles with AI-powered units, they will need the support of academic and industry experts. This raises questions about what role universities should play in the military AI revolution, what boundaries should not be crossed, and what centralized oversight and watchdog bodies should be set up to monitor AI use in weapons.

In terms of protecting nonmilitary research, we may need to think about which AI developments can be classified as closed-source versus open-source, how to set up use agreements, and how international collaborations will be affected by the increasing militarization of computer science.

HM News: How can we move forward in a way that enables creative AI research while safeguarding against its use for weapons?

Rajan: Academics have had and will continue to have important and productive collaborations with the government and major companies involved in technology, medicine, and information, as well as with the military. However, historically academics have also had embarrassing, harmful collaborations with the sugar, fossil fuel, and tobacco industries. Modern universities have institutional training, oversight, and transparency requirements to help researchers understand the ethical risks and biases of industry funding and to avoid producing ethically dubious science.

To our knowledge, no such training and oversight currently exists for military funding. The problems we raise are complex and can't be solved by a single policy, but we think a good first step is for universities to create discussion seminars, internal regulations, and oversight processes for military-, defense-, and national security agency-funded projects that are similar to those already in place for industry-funded projects.

HM News: What do you think is a realistic outcome?

Rajan: Some in the community have called for a full ban on military AI. While we agree that this would be morally ideal, we recognize that it's not realistic - AI is too useful for military purposes to get the international consensus needed to establish or enforce such a ban.

Instead, we think countries should focus their efforts on developing AI-powered weapons that augment, rather than replace, human soldiers. By prioritizing human oversight of these weapons, we can hopefully prevent the worst risks.

We also want to emphasize that AI weapons are not a monolith, and they need to be examined by capability. It's important for us to ban and regulate the most egregious classes of AI weapons as soon as possible and for our communities and institutions to establish boundaries that should not be crossed.

