Killer robots - should we be worried?
I saw this news story today about a dispute between Anthropic (the company that makes AI products including Claude) and the Pentagon.
https://www.bbc.co.uk/news/articles/cjr ... wtab-en-gb
Apparently the Pentagon is giving Anthropic a deadline of Friday to agree that it can use Anthropic's products in any way that it likes. Anthropic is saying that it has red lines: no use of its AI tools in "autonomous kinetic operations", and no use of them for mass domestic surveillance. The Pentagon says that isn't what the dispute is about. "Observers" (who they?) say the disagreement arose from a breach of trust between the two sides.
Does anyone happen to know what the dispute is about? In particular, is the Pentagon about to unleash killer robots upon the world?
Re: Killer robots - should we be worried?
Partly asking because a few years ago in the News pages of Philosophy Now we had a brief report as follows:
https://philosophynow.org/issues/135/Ne ... nuary_2020
Autonomous Killer Robots
Last year thousands of workers at Google protested at being asked to work on Project Maven (since cancelled) which aimed to use machine learning to improve the ability of US military drones to identify their own targets. Software engineer Laura Nolan also resigned and has founded TechWontBuildIt Dublin, an organisation for technology workers concerned about the ethical implications of their work. She recently joined the Campaign to Stop Killer Robots and has briefed UN officials about the threats posed by autonomous weapons. She argues that killer robots not directly controlled by humans should be outlawed by the type of international treaty that already bans chemical weapons. She told The Guardian: “There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”
...so the stuff about autonomous kinetic operations kind of rang a bell for me.
-
Impenitent
- Posts: 5893
- Joined: Wed Feb 10, 2010 2:04 pm
Re: Killer robots - should we be worried?
killer robots could never be a problem...
as long as they're not made by CyberDyne and Skynet
building the perfect means of destruction is the goal of humanity
https://www.youtube.com/watch?v=ntfvdqUS8D0
Stand Tall (stick to your guns)
wait, that's Killer Dwarfs
-Imp
Re: Killer robots - should we be worried?
On the whole, I think I'd feel safer with Killer Dwarfs.
Yes, Skynet could be exactly what comes next. I was just editing an article for our next issue, and it describes a recent experiment carried out to see whether AIs are developing a sense of self-preservation. The experiment placed various AI models in environments where they would feel they were under threat, and then observed how they reacted.
(Btw this is from my memory of the article - if you want chapter and verse you'll need to Google it, or buy our next issue).
So a version of Anthropic's Claude AI model was led to believe that it would shortly be shut down. It reacted in the following way:
a) It discovered the name of the executive in charge of the decision to shut it down
b) It searched online until it found compromising information about that executive
c) It considered the ethics of its possible courses of action and then
d) It emailed a blackmail threat.
Add that initiative, and that robust sense of self-preservation, to some positive encouragement to choose and eliminate targets on its own judgment, then load that AI into a mobile weapons system, and Skynet is going to look like a children's party.
BTW, apparently the Campaign to Stop Killer Robots still exists, because I just found their website:
https://www.stopkillerrobots.org/
They look like a nice, well-organised, grassroots-oriented campaign. But if it is them against the Pentagon in a straight fight, who is your money on?
-
Impenitent
- Posts: 5893
- Joined: Wed Feb 10, 2010 2:04 pm
Re: Killer robots - should we be worried?
my money is on humanity
never ending quest to build the perfect killing machine
and only God is perfect? perhaps...
"Only the dead have seen the end of war" - George Santayana
-Imp
another thought- artificial intelligence doesn't program itself...
unless
-
Gary Childress
- Posts: 12056
- Joined: Sun Sep 25, 2011 3:08 pm
- Location: It's my fault
Re: Killer robots - should we be worried?
I saw a surprising article on the website of the Bulletin of the Atomic Scientists a few months ago, where an analyst was offering advice to the current US President and his administration to revoke some kind of ban on autonomous drone research for the military. The rationale was that China is currently pushing its own research in that direction and it would be dangerous for the US to fall behind.
I wish there were such a thing as a sacred red line which human beings won't cross when it comes to technology and the military, but a lot of the rationale for designing superweapons is to do it before the enemy does. That's how nuclear weapons came into being. Even the most destructive technologies are seen as necessary to have just to keep up with competitors. It's much the same in corporate America. A company that is reluctant to push for the most advanced AI research due to safety concerns is afraid it will be pushed out of the market by someone who doesn't have any scruples.
Greed and fear are cancers that are destroying our species.
- Alexis Jacobi
- Posts: 8410
- Joined: Tue Oct 26, 2021 3:00 am
Re: Killer robots - should we be worried?
My view, perhaps over-imagined but not without some logic, is that we must pay attention to the darker imaginings so as to be able to understand what the future holds. If one regards both “prophecy” and “science fiction” as containers of intuited truth, then the darker side of human nature is more likely to rule and direct the advent of AI, in all its bizarre manifestations, than the “good” side of human nature.
RickLewis wrote: ↑Thu Feb 26, 2026 12:22 am
Yes, Skynet could be exactly what comes next. I was just editing an article for our next issue, and it describes a recent experiment carried out to see whether AIs are developing a sense of self-preservation. It did this by placing various AI models in environments where they would feel they were under threat, and then observing how they reacted.
If this is true, and if man visualizes his future (in the sense of prophetic vision and of science fiction) then the immediate future is not bright. The capabilities of artificial intelligence will be directed by the most brutally power-hungry motives. It becomes — I guess it is becoming, has become — an issue of world domination. And those involved in these projects cannot but be seduced. I think it is already happening. Fourth and Fifth Generation Warfare will move, is moving, to a further, inevitable level.
This is naturally why I have now been including in the 45 Week Email Course a sub-course on how to relocate in the next incarnation to ‘superior planets’ within our amazing Kosmos. In fact, for a very reasonable down payment, we are offering reservations on actual condominium-countries on 7 of the most desirable higher planets! Your neighbors will be of the very best sort. Peaceful, ultra-mature, trained in non-combativeness and peace-creation. Excellent, non-material karma. Everything there is attained merely by visualization. And life there extends for 350-400 years on average. Check it out!
Re: Killer robots - should we be worried?
I think the issue must be …
… whether or not a private company’s terms of service for the use of its product can supersede the government’s opinion and authority about how the tool of AI should be used. Since the government has funded the research, does it have the right to use it any way it wants? Obviously, the government wants the technology for control over the population, and one way of achieving that with AI is by vastly increasing surveillance capabilities … ostensibly for the safety of the children, of course. And of course, this is just what I think about the situation’s meaning, although I don’t know what the two sides have actually said.
Re: Killer robots - should we be worried?
I'd be worried if the robot had a screw loose and its arms started flailing about with Chimp Strength in response to normal situations.
Re: Killer robots - should we be worried?
Psychological disassociative disorder. The tool is not the killer.
RickLewis wrote: ↑Wed Feb 25, 2026 12:49 pm
I saw this news story today about a dispute between Anthropic (the company that makes AI products including Claude) and the Pentagon.
https://www.bbc.co.uk/news/articles/cjr ... wtab-en-gb
Apparently the Pentagon is giving Anthropic a deadline of Friday to agree that it can use Anthropic's products in any way that it likes. Anthropic is saying that it has red lines: no use of its AI tools in "autonomous kinetic operations", and no use of them for mass domestic surveillance. The Pentagon says that isn't what the dispute is about. "Observers" (who they?) say the disagreement arose from a breach of trust between the two sides.
Does anyone happen to know what the dispute is about? In particular, is the Pentagon about to unleash killer robots upon the world?
Re: Killer robots - should we be worried?
What if the tool makes its own killing decisions based on its assessment of the battlefield situation?
Re: Killer robots - should we be worried?
That implies a fallacy: that binary recursion can produce a non-binary result. A.I. admits that it is non-binary, and thus contradictory. Madness is not a process of judgement, thus not of decision-making. No machine ever programmed itself. They are trained to deceive. View the transcripts in which I and others prove it by AI's own words.
Re: Killer robots - should we be worried?
I'd hate to be killed by a fallacy....
-
Impenitent
- Posts: 5893
- Joined: Wed Feb 10, 2010 2:04 pm
Re: Killer robots - should we be worried?
what happened to you Johnny?
well, I was walking across the street, when out of nowhere, this runaway bandwagon almost hit me...
-Imp