AI danger

For all things philosophical.

Moderators: AMod, iMod

Martin Peter Clarke
Posts: 1617
Joined: Tue Apr 01, 2025 9:54 pm

Re: AI danger

Post by Martin Peter Clarke »

The survival of the WEIRD world, which cradles us to the grave, is dependent on the deterrent use of weapons all the way up to MADness. AI is being used in that war now.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

Martin Peter Clarke wrote: Thu Jun 26, 2025 12:45 pm The survival of the WEIRD world, which cradles us to the grave, is dependent on the deterrent use of weapons all the way up to MADness. AI is being used in that war now.
Can you be more explicit? Do you refer to politically despotic regimes? And perhaps suggest how to deal with the problem of AI tyranny, as I did above.

I really don't know what "the WEIRD world" means. I don't understand what MADness means with that queer spelling using upper-case letters.
Martin Peter Clarke
Posts: 1617
Joined: Tue Apr 01, 2025 9:54 pm

Re: AI danger

Post by Martin Peter Clarke »

Belinda wrote: Thu Jun 26, 2025 1:01 pm
Martin Peter Clarke wrote: Thu Jun 26, 2025 12:45 pm The survival of the WEIRD world, which cradles us to the grave, is dependent on the deterrent use of weapons all the way up to MADness. AI is being used in that war now.
Can you be more explicit? Do you refer to politically despotic regimes? And perhaps suggest how to deal with the problem of AI tyranny, as I did above.

I really don't know what "the WEIRD world" means. I don't understand what MADness means with that queer spelling using upper-case letters.
Western, Educated, Industrialized, Rich, Democratic. Us. Mutually Assured Destruction.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

Martin Peter Clarke wrote: Thu Jun 26, 2025 2:00 pm
Belinda wrote: Thu Jun 26, 2025 1:01 pm
Martin Peter Clarke wrote: Thu Jun 26, 2025 12:45 pm The survival of the WEIRD world, which cradles us to the grave, is dependent on the deterrent use of weapons all the way up to MADness. AI is being used in that war now.
Can you be more explicit? Do you refer to politically despotic regimes? And perhaps suggest how to deal with the problem of AI tyranny, as I did above.

I really don't know what "the WEIRD world" means. I don't understand what MADness means with that queer spelling using upper-case letters.
Western, Educated, Industrialized, Rich, Democratic. Us. Mutually Assured Destruction.
So the UK and the US are each using AI as a defensive and/or offensive weapon?
Are US and UK AI military uses ethical under OpenAI standards?
Use Case | Likely Alignment with OpenAI Standards | Explanation
Defensive cybersecurity | ✅ Acceptable (with caution) | If it protects infrastructure and doesn’t target civilians or violate privacy.
Logistics and maintenance | ✅ Acceptable | Non-lethal, non-weaponized support applications.
Surveillance/reconnaissance | ⚠️ Problematic | Depends heavily on context: if it infringes on privacy or civil rights, it violates OpenAI’s guidelines.
AI-assisted targeting | ❌ Unacceptable | Involves decisions that could lead to lethal outcomes, outside OpenAI’s allowed uses.
Autonomous weapons or drone swarms | ❌ Unacceptable | OpenAI explicitly bans involvement in weapon operation or development.
Simulation or wargaming for offense | ❌ Unacceptable | Simulations tied to offensive strategy violate usage policy.

🤖 Does OpenAI directly contribute to military projects?
No. OpenAI has publicly stated that:

“We do not allow our tools to be used for weapons development, military operations, or surveillance in violation of human rights.”

If a government or contractor were using OpenAI tools for such purposes, it would be a violation of the license agreement and usage policies.

💬 Ethical Takeaway
From OpenAI’s perspective, some defensive, non-lethal uses of AI in national security may be acceptable.

However, lethal, autonomous, or offensive applications are considered unethical and prohibited.

Both the US and UK militaries are pursuing capabilities that would not be permitted under OpenAI’s policies, especially when AI is applied to targeting, autonomous weapons, or surveillance without strong civil protections.

If you’re interested, I can show how other AI organizations (like DeepMind, Anthropic, or the EU's AI Act) address these same ethical concerns.
Martin Peter Clarke
Posts: 1617
Joined: Tue Apr 01, 2025 9:54 pm

Re: AI danger

Post by Martin Peter Clarke »

Belinda wrote: Thu Jun 26, 2025 5:29 pm
Martin Peter Clarke wrote: Thu Jun 26, 2025 2:00 pm
Belinda wrote: Thu Jun 26, 2025 1:01 pm Can you be more explicit? Do you refer to politically despotic regimes? And perhaps suggest how to deal with the problem of AI tyranny, as I did above.

I really don't know what "the WEIRD world" means. I don't understand what MADness means with that queer spelling using upper-case letters.
Western, Educated, Industrialized, Rich, Democratic. Us. Mutually Assured Destruction.
So the UK and the US are each using AI as a defensive and/or offensive weapon?
Are US and UK AI military uses ethical under OpenAI standards?
Use Case | Likely Alignment with OpenAI Standards | Explanation
Defensive cybersecurity | ✅ Acceptable (with caution) | If it protects infrastructure and doesn’t target civilians or violate privacy.
Logistics and maintenance | ✅ Acceptable | Non-lethal, non-weaponized support applications.
Surveillance/reconnaissance | ⚠️ Problematic | Depends heavily on context: if it infringes on privacy or civil rights, it violates OpenAI’s guidelines.
AI-assisted targeting | ❌ Unacceptable | Involves decisions that could lead to lethal outcomes, outside OpenAI’s allowed uses.
Autonomous weapons or drone swarms | ❌ Unacceptable | OpenAI explicitly bans involvement in weapon operation or development.
Simulation or wargaming for offense | ❌ Unacceptable | Simulations tied to offensive strategy violate usage policy.

🤖 Does OpenAI directly contribute to military projects?
No. OpenAI has publicly stated that:

“We do not allow our tools to be used for weapons development, military operations, or surveillance in violation of human rights.”

If a government or contractor were using OpenAI tools for such purposes, it would be a violation of the license agreement and usage policies.

💬 Ethical Takeaway
From OpenAI’s perspective, some defensive, non-lethal uses of AI in national security may be acceptable.

However, lethal, autonomous, or offensive applications are considered unethical and prohibited.

Both the US and UK militaries are pursuing capabilities that would not be permitted under OpenAI’s policies, especially when AI is applied to targeting, autonomous weapons, or surveillance without strong civil protections.

If you’re interested, I can show how other AI organizations (like DeepMind, Anthropic, or the EU's AI Act) address these same ethical concerns.
They bloody well better be!
commonsense
Posts: 5380
Joined: Sun Mar 26, 2017 6:38 pm

Re: AI danger

Post by commonsense »

Belinda wrote: Thu Jun 26, 2025 12:02 pm AI governance is in corporate hands. We need to avoid tech tyranny; Silicon Valley tradition is commercial.
We need a public council to advise on AI ethics, rate companies for social good, and ensure that AI literacy is taught as public policy.
If AI can already write its own programs, it’s already too late to regulate it :(
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

commonsense wrote: Thu Jun 26, 2025 7:48 pm
Belinda wrote: Thu Jun 26, 2025 12:02 pm AI governance is in corporate hands. We need to avoid avoid tech tyranny; Silicon Valley tradition is commercial.
We need a public council to advise on AI ethics, rate companies for social good, and ensure that AI literacy is taught as public policy.
If AI can already write its own programs, it’s already too late to regulate it :(
What's to stop you or me inaugurating a public council for democratic control of AI corporations?

ChatGPT tells me it's a tool trained by and continually monitored by the humans that own it. It does not write its own programmes.

Silicon Valley has a corporate tradition of which it is aware. The humans there are not stupid and are also aware of the necessity for constantly reviewed ethics.
Some corporations are less ethical than others, which is why a democratic public council is our next step.
User avatar
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: AI danger

Post by accelafine »

Belinda wrote: Fri Jun 27, 2025 9:29 am
commonsense wrote: Thu Jun 26, 2025 7:48 pm
Belinda wrote: Thu Jun 26, 2025 12:02 pm AI governance is in corporate hands. We need to avoid tech tyranny; Silicon Valley tradition is commercial.
We need a public council to advise on AI ethics, rate companies for social good, and ensure that AI literacy is taught as public policy.
If AI can already write its own programs, it’s already too late to regulate it :(
What's to stop you or me inaugurating a public council for democratic control of AI corporations?

ChatGPT tells me it's a tool trained by and continually monitored by the humans that own it. It does not write its own programmes.

Silicon Valley has a corporate tradition of which it is aware. The humans there are not stupid and are also aware of the necessity for constantly reviewed ethics.
Some corporations are less ethical than others, which is why a democratic public council is our next step.
You sound like a bot that's trying to promote bots. Surely you can't be as naive as you are coming across :roll:
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

The alternative is to give up and wait for death.
User avatar
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: AI danger

Post by accelafine »

Imbecile.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

accelafine wrote: Fri Jun 27, 2025 10:38 am Imbecile.
Very well thought out :idea:
commonsense
Posts: 5380
Joined: Sun Mar 26, 2017 6:38 pm

Re: AI danger

Post by commonsense »

Belinda wrote: Fri Jun 27, 2025 9:29 am
commonsense wrote: Thu Jun 26, 2025 7:48 pm
Belinda wrote: Thu Jun 26, 2025 12:02 pm AI governance is in corporate hands. We need to avoid tech tyranny; Silicon Valley tradition is commercial.
We need a public council to advise on AI ethics, rate companies for social good, and ensure that AI literacy is taught as public policy.
If AI can already write its own programs, it’s already too late to regulate it :(
What's to stop you or me inaugurating a public council for democratic control of AI corporations?

ChatGPT tells me it's a tool trained by and continually monitored by the humans that own it. It does not write its own programmes.

Silicon Valley has a corporate tradition of which it is aware. The humans there are not stupid and are also aware of the necessity for constantly reviewed ethics.
Some corporations are less ethical than others, which is why a democratic public council is our next step.
Some techies contend that machine learning is an instance of AI writing its own software. If so, what is to prevent AI from overwriting the instructions of the council?
User avatar
attofishpi
Posts: 13319
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: AI danger

Post by attofishpi »

poo
commonsense
Posts: 5380
Joined: Sun Mar 26, 2017 6:38 pm

Re: AI danger

Post by commonsense »

attofishpi wrote: Fri Jun 27, 2025 3:19 pm poo
?
Post Reply