AI danger
-
Martin Peter Clarke
- Posts: 1617
- Joined: Tue Apr 01, 2025 9:54 pm
Re: AI danger
The survival of the WEIRD world, which cradles us to the grave, is dependent on the deterrent use of weapons all the way up to MADness. AI is being used in that war now.
Re: AI danger
Can you be more explicit? Do you refer to politically despotic regimes? And perhaps suggest how to deal with the problem of AI tyranny, as I did above.
Martin Peter Clarke wrote: ↑Thu Jun 26, 2025 12:45 pm The survival of the WEIRD world, which cradles us to the grave, is dependent on the deterrent use of weapons all the way up to MADness. AI is being used in that war now.
I really don't know what "the WEIRD world" means. I don't understand what MADness means with that queer spelling using upper case letters.
-
Martin Peter Clarke
- Posts: 1617
- Joined: Tue Apr 01, 2025 9:54 pm
Re: AI danger
Western, Educated, Industrialized, Rich, Democratic. Us. Mutually Assured Destruction.
Belinda wrote: ↑Thu Jun 26, 2025 1:01 pm Can you be more explicit? Do you refer to politically despotic regimes? And perhaps suggest how to deal with the problem of AI tyranny, as I did above.
Martin Peter Clarke wrote: ↑Thu Jun 26, 2025 12:45 pm The survival of the WEIRD world, which cradles us to the grave, is dependent on the deterrent use of weapons all the way up to MADness. AI is being used in that war now.
I really don't know what "the WEIRD world" means. I don't understand what MADness means with that queer spelling using upper case letters.
Re: AI danger
So UK and US are each using AI as a defensive and/or offensive weapon?
Martin Peter Clarke wrote: ↑Thu Jun 26, 2025 2:00 pm Western, Educated, Industrialized, Rich, Democratic. Us. Mutually Assured Destruction.
Belinda wrote: ↑Thu Jun 26, 2025 1:01 pm Can you be more explicit? Do you refer to politically despotic regimes? And perhaps suggest how to deal with the problem of AI tyranny, as I did above.
Martin Peter Clarke wrote: ↑Thu Jun 26, 2025 12:45 pm The survival of the WEIRD world, which cradles us to the grave, is dependent on the deterrent use of weapons all the way up to MADness. AI is being used in that war now.
I really don't know what "the WEIRD world" means. I don't understand what MADness means with that queer spelling using upper case letters.
Are US and UK AI military uses ethical under OpenAI standards?
Use case, likely alignment with OpenAI standards, and explanation:
- Defensive cybersecurity: Acceptable (with caution). If it protects infrastructure and doesn’t target civilians or violate privacy.
- Logistics and maintenance: Acceptable. Non-lethal, non-weaponized support applications.
- Surveillance/reconnaissance: Problematic. Depends heavily on context; if it infringes on privacy or civil rights, it violates OpenAI’s guidelines.
- AI-assisted targeting: Unacceptable. Involves decisions that could lead to lethal outcomes, which fall outside OpenAI’s allowed uses.
- Autonomous weapons or drone swarms: Unacceptable. OpenAI explicitly bans involvement in weapon operation or development.
- Simulation or wargaming for offense: Unacceptable. Simulations tied to offensive strategy violate usage policy.
Does OpenAI directly contribute to military projects?
No. OpenAI has publicly stated that:
“We do not allow our tools to be used for weapons development, military operations, or surveillance in violation of human rights.”
If a government or contractor were using OpenAI tools for such purposes, it would be a violation of the license agreement and usage policies.
Ethical Takeaway
From OpenAI’s perspective, some defensive, non-lethal uses of AI in national security may be acceptable.
However, lethal, autonomous, or offensive applications are considered unethical and prohibited.
Both the US and UK militaries are pursuing capabilities that would not be permitted under OpenAI’s policies, especially when AI is applied to targeting, autonomous weapons, or surveillance without strong civil protections.
If you’re interested, I can show how other AI organizations (like DeepMind, Anthropic, or the EU's AI Act) address these same ethical concerns.
-
Martin Peter Clarke
- Posts: 1617
- Joined: Tue Apr 01, 2025 9:54 pm
Re: AI danger
They bloody well better be!
Belinda wrote: ↑Thu Jun 26, 2025 5:29 pm So UK and US are each using AI as a defensive and/or offensive weapon?
Martin Peter Clarke wrote: ↑Thu Jun 26, 2025 2:00 pm Western, Educated, Industrialized, Rich, Democratic. Us. Mutually Assured Destruction.
Belinda wrote: ↑Thu Jun 26, 2025 1:01 pm Can you be more explicit? Do you refer to politically despotic regimes? And perhaps suggest how to deal with the problem of AI tyranny, as I did above.
-
commonsense
- Posts: 5380
- Joined: Sun Mar 26, 2017 6:38 pm
Re: AI danger
If AI can already write its own programs, it’s already too late to regulate it.
Re: AI danger
What's to stop you or me from inaugurating a public council for democratic control of AI corporations?
commonsense wrote: ↑Thu Jun 26, 2025 7:48 pm If AI can already write its own programs, it’s already too late to regulate it.
ChatGPT tells me it's a tool trained by and continually monitored by the humans that own it. It does not write its own programmes.
Silicon Valley has a corporate tradition of which it is aware. The humans there are not stupid and are also aware of the necessity for constantly reviewed ethics.
Some corporations are less ethical than others, which is why a democratic public council is our next step.
- accelafine
- Posts: 5042
- Joined: Sat Nov 04, 2023 10:16 pm
Re: AI danger
You sound like a bot that's trying to promote bots. Surely you can't be as naive as you are coming across.
Belinda wrote: ↑Fri Jun 27, 2025 9:29 am What's to stop you or me from inaugurating a public council for democratic control of AI corporations?
commonsense wrote: ↑Thu Jun 26, 2025 7:48 pm If AI can already write its own programs, it’s already too late to regulate it.
ChatGPT tells me it's a tool trained by and continually monitored by the humans that own it. It does not write its own programmes.
Silicon Valley has a corporate tradition of which it is aware. The humans there are not stupid and are also aware of the necessity for constantly reviewed ethics.
Some corporations are less ethical than others, which is why a democratic public council is our next step.
Re: AI danger
The alternative is to give up and wait for death.
Re: AI danger
Very well thought out.
-
commonsense
- Posts: 5380
- Joined: Sun Mar 26, 2017 6:38 pm
Re: AI danger
Some techies purport that machine learning is an instance of AI writing its own software. If so, what is to prevent AI from overwriting the instructions of the council?
Belinda wrote: ↑Fri Jun 27, 2025 9:29 am What's to stop you or me from inaugurating a public council for democratic control of AI corporations?
commonsense wrote: ↑Thu Jun 26, 2025 7:48 pm If AI can already write its own programs, it’s already too late to regulate it.
ChatGPT tells me it's a tool trained by and continually monitored by the humans that own it. It does not write its own programmes.
Silicon Valley has a corporate tradition of which it is aware. The humans there are not stupid and are also aware of the necessity for constantly reviewed ethics.
Some corporations are less ethical than others, which is why a democratic public council is our next step.
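The "writes its own software" claim is worth making concrete. In ordinary machine learning, training adjusts numeric parameters inside a fixed program; the source code itself is never rewritten. A minimal, purely illustrative sketch (hypothetical toy example, not any vendor's actual system):

```python
# Toy gradient descent: training changes only the numeric weight w,
# never the program's code. Data is noiseless samples of y = 2x.

def train(w, data, lr=0.1, steps=100):
    """Nudge w to reduce squared error; the code itself is unchanged."""
    for _ in range(steps):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad             # only this number is "learned"
    return w

data = [(1, 2), (2, 4), (3, 6)]
w = train(0.0, data)
print(round(w, 3))  # prints 2.0, the learned weight
```

On this reading, what "learns" is a set of numbers, so the worry about AI "overwriting the instructions of the council" would require a qualitatively different capability than parameter fitting.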
- attofishpi
- Posts: 13319
- Joined: Tue Aug 16, 2011 8:10 am
- Location: Orion Spur
- Contact:
Re: AI danger
poo
-
commonsense
- Posts: 5380
- Joined: Sun Mar 26, 2017 6:38 pm