Activists from the Campaign to Stop Killer Robots, a coalition of non-governmental organisations opposing lethal autonomous weapons or so-called ‘killer robots’, stage a protest at Brandenburg Gate in Berlin, Germany, in March 2019. Photograph: Annegret Hilse/Reuters
Included on this page are resources on the topic of “lethal autonomous weapons” dating back to 2013; the issue has been under discussion and in development for many years.
MAIN TOPICS INCLUDED BELOW
Stop Killer Robots, Advocacy Group
Documentary, Immoral Code
Negotiating a Treaty on Autonomous Weapons Systems
Robot Attack Dogs
STOP KILLER ROBOTS
The Stop Killer Robots campaign
2min YouTube video
From their website: “Technology should be used to empower all people, not to reduce us – to stereotypes, labels, objects, or just a pattern of 1’s and 0’s.
With growing digital dehumanisation, the Stop Killer Robots coalition works to ensure human control in the use of force. Our campaign calls for new international law on autonomy in weapons systems.”
From Stop Killer Robots website
“Immoral Code is a documentary that contemplates the impact of Killer Robots in an increasingly automated world – one where machines make decisions over who to kill or what to destroy.
Automated decisions are being introduced across all parts of society. From pre-programmed bias to data protection and privacy concerns, there are real limitations to these algorithms – especially when that same autonomy is applied to weapons systems.
Life and death decisions are not black and white, on or off, 1s and 0s. They’re complex and difficult. Reducing these decisions down to automated processes raises many legal, ethical, technical, and security concerns.
The film examines whether there are situations where it’s morally and socially acceptable to take life, and importantly – would a computer know the difference?”
Discover more at www.immoralcode.io
In The News – Stop Killer Robots website
70 states deliver joint statement on autonomous weapons systems at UN General Assembly
For the first time at the United Nations General Assembly, states across the world united in delivering a joint statement on autonomous weapons systems. With a total of 70 states joining, this was the largest cross-regional group statement ever made throughout UN discussions on the issue.
Watch the video at this link (7 min): Joint Statement on Lethal Autonomous Weapons Systems, delivered on 21/10/2022 at UNGA 1C, New York
This briefing paper sets out a positive vision to encourage governments to commence negotiations on a new treaty on autonomous weapons systems.
“A new treaty on autonomous weapons systems would have an historic impact on the relationship between humans and technology for generations to come. The ability to achieve this milestone requires certain foundations to be established in order to initiate negotiations and ensure their successful progress to completion. After 9 years of international discussions at the CCW, the core ingredients for commencing this process are now in place to make a new treaty possible.”
The following graphic, from the Stop Killer Robots website, offers ideas for ways to engage and get involved.
What are the legal concerns when governments turn to machines to end human life?
UVA Today asked University of Virginia law professor Ashley Deeks, who has studied this intersection, to weigh in.
“These are not autonomous systems that could independently select who to use force against. Police officers will be operating them, even if remotely and from some distance. So calling them ‘killer robots’ could be a little misleading.”
“A significant part of what militaries do during wartime is identify and kill enemy forces and destroy their enemy’s military equipment. AI tools are well-suited to help militaries make predictions about where particular targets will be located and which strikes will help win the war.”
“There is a heated debate about whether states should ever deploy lethal autonomous systems that can decide on their own who or what to target; the idea is that those systems would be deployed during wartime, not peacetime.”
The US has rejected calls for a binding agreement regulating or banning the use of “killer robots”, instead proposing a “code of conduct” at the United Nations.
Speaking at a meeting in Geneva focused on finding common ground on the use of such so-called lethal autonomous weapons, a US official balked at the idea of regulating their use through a “legally-binding instrument”.
Thanks to the team at WantToKnow.info for their summary of this news piece.
Last week, an Israeli defense company painted a frightening picture. In a roughly two-minute video on YouTube that resembles an action movie, soldiers out on a mission are suddenly pinned down by enemy gunfire and calling for help. In response, a tiny drone zips off its mother ship to the rescue, zooming behind the enemy soldiers and killing them with ease. While the situation is fake, the drone — unveiled last week by Israel-based Elbit Systems — is not.
The Lanius, which in Latin can refer to butcherbirds, represents a new generation of drone: nimble, wired with artificial intelligence, and able to scout and kill. The machine is based on racing drone design, allowing it to maneuver into tight spaces, such as alleyways and small buildings. After being sent into battle, Lanius’s algorithm can make a map of the scene and scan people, differentiating enemies from allies — feeding all that data back to soldiers who can then simply push a button to attack or kill whom they want.
For weapons critics, that represents a nightmare scenario, which could alter the dynamics of war. “It’s extremely concerning,” said Catherine Connolly, an arms expert at Stop Killer Robots, an anti-weapons advocacy group. “It’s basically just allowing the machine to decide if you live or die if we remove the human control element for that.” According to the drone’s data sheet, the drone is palm-size, roughly 11 inches by 6 inches. It has a top speed of 45 miles per hour. It can fly for about seven minutes, and has the ability to carry lethal and nonlethal materials.
Note: US General Paul Selva has warned against employing killer robots in warfare for ethical reasons.
ROBOT ATTACK DOGS
Killer robots that can attack targets without any human input
“should not have the power of life and death over human beings”
Terrifying video shows Chinese robot attack dog with machine gun dropped by drone
By Michael Lee
Oct 26, 2022
Black Mirror’s killer robot dogs become a reality
Terrifying four-legged bot with a 6.5mm SNIPER RIFLE on its back is unveiled at the US Army trade show and can precisely fire at targets 3/4 of a mile away
Oct 14, 2021
Thanks to the team at WantToKnow.info for their summary of this news piece.
US general warns of out-of-control killer robots
CNN News, July 18, 2017
America’s second-highest ranking military officer, Gen. Paul Selva, advocated Tuesday for “keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.” Selva was responding to a question from Sen. Gary Peters, a Michigan Democrat, about his views on a Department of Defense directive that requires a human operator to be kept in the decision-making process when it comes to the taking of human life by autonomous weapons systems. Peters said the restriction was “due to expire later this year.”
“I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life,”
Selva told the Senate Armed Services Committee during a confirmation hearing for his reappointment as the vice chairman of the Joint Chiefs of Staff. He predicted that “there will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action,” but added that he was “an advocate for keeping that restriction.” Selva said humans needed to remain in the decision making process “because we take our values to war.”
His comments come as the US military has sought increasingly autonomous weapons systems. Reference article: Navy seeks autonomous drones despite warnings from critics, Feb 16, 2016.
Note: In another article, Tesla founder Elon Musk warns against the dangers of AI without regulation. Reference: Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization’, July 15, 2017.
A 2013 report for the U.N. Human Rights Commission called for a worldwide moratorium on the “testing, production, assembly, transfer, acquisition, deployment and use” of killer robots until an international conference can develop rules for their use.
We thank the team at WantToKnow.info for their summary of this piece.
UN report wants to terminate killer robots, opposes life-or-death powers over humans
Key Excerpts from Article on Website of Washington Post/Associated Press
Killer robots that can attack targets without any human input “should not have the power of life and death over human beings,” a new draft U.N. report says. The report for the U.N. Human Rights Commission … deals with legal and philosophical issues involved in giving robots lethal powers over humans.
Report author Christof Heyns, a South African professor of human rights law, calls for a worldwide moratorium on the “testing, production, assembly, transfer, acquisition, deployment and use” of killer robots until an international conference can develop rules for their use. The United States, Britain, Israel, South Korea and Japan have developed various types of fully or semi-autonomous weapons.
Heyns focuses on a new generation of weapons that choose their targets and execute them. He calls them “lethal autonomous robotics,” or LARs for short, and says: “Decisions over life and death in armed conflict may require compassion and intuition. Humans — while they are fallible — at least might possess these qualities, whereas robots definitely do not.”
The report goes beyond the recent debate over drone killings. Drones do have human oversight. The killer robots are programmed to make autonomous decisions on the spot without orders from humans. “Lethal autonomous robotics (LARs) … would add a new dimension to this distancing [i.e., the remote control of drones], in that targeting decisions could be taken by the robots themselves. In addition to being physically removed from the kinetic action, humans would also become more detached from decisions to kill – and their execution,” he wrote.