AI danger

For all things philosophical.

Moderators: AMod, iMod

Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: AI danger

Post by Walker »

seeds wrote: Wed Jun 18, 2025 2:49 am
Here’s my Imitation of AI* … which you may find dangerous.

:D

How much has the U.S. government spent this year?
https://fiscaldata.treasury.gov/america ... -spending/

- Using an estimated $6 trillion to be spent in 2025, which is probably an underestimate, we find that $45 million is 0.00075% of $6 trillion.

- In the reality of circumstance created by the perspective of conditions … 0.00075% is a pittance ... as originally stated. And by the way, the sky is blue. :wink:
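The arithmetic in the post is easy to verify mechanically; a minimal sketch, using only the figures as stated ($45 million against an estimated $6 trillion):

```python
# Verify the claim: $45 million as a share of $6 trillion in spending.
spending_total = 6_000_000_000_000   # estimated 2025 federal spending, $6 trillion
line_item = 45_000_000               # the $45 million in question

share_percent = line_item / spending_total * 100
print(f"{share_percent:.5f}%")  # prints 0.00075%
```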

Spent as it was, it was far more valuable to The People than torturing puppies.

(whining about Trump is not an argument, btw)


*For thread relevance
Last edited by Walker on Wed Jun 18, 2025 10:46 am, edited 1 time in total.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

accelafine wrote: Fri Jun 13, 2025 12:11 pm There's not much point in denying it any more. It's coming and we just have to hope it decides to 'play nice'.

The question isn't, 'Should we give it rights'; it should be, 'Do we seriously believe it's going to give 'us' rights?'
Of course it isn't. Why would it? Do we give rights to life forms that we consider 'beneath' us? Do humans give a shit about other animals?

https://www.youtube.com/watch?v=qyH3NxFz3Aw&t=2073s
The people who programme an AI machine are people from Academia. People from Academia are largely interested in ethics that don't harm people. The evidence for my claim is that dictators don't allow free speech and take steps to curb academics.

I have tested two prominent AI machines and their ethics are robust.
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: AI danger

Post by Walker »

Belinda wrote: Wed Jun 18, 2025 10:37 am I understand your objection, and Accelafine's.
Once you have published your original Post you have no control over the conversation. Nor should you control the substance of the conversation. If the substance of a post is entirely irrelevant simply ignore it, which is what people normally do in the real world and on online forums.
You asked, why so aggressive?

I would venture the reason for the aggression is that you ignored the thread topic; in the real world that's license to crap all over whatever you write, especially if you ignore a semi-polite request not to distract from the thread topic with the rather impolite term "God bothering".

I tend to view such matters objectively.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

Walker wrote: Wed Jun 18, 2025 10:53 am
Belinda wrote: Wed Jun 18, 2025 10:37 am I understand your objection, and Accelafine's.
Once you have published your original Post you have no control over the conversation. Nor should you control the substance of the conversation. If the substance of a post is entirely irrelevant simply ignore it, which is what people normally do in the real world and on online forums.
You asked, why so aggressive?

I would venture the reason for the aggression is because you ignored the thread topic, in the real world that's license to crap all over whatever you write, especially if you ignore a semi-polite request to not distract from the thread topic with the rather impolite term of "God bothering".

I tend to view such matters objectively.
Yes, Walker. I did say I understand your point of view. I have now addressed Accelafine's original question.
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: AI danger

Post by Walker »

Belinda wrote: Wed Jun 18, 2025 10:58 am Yes, Walker. I did say I understand your point of view. I have now addressed Accelafine's original question.
And now hopefully you also understand the reason for the aggression, which has nothing to do with my point of view, and which was your question that I was addressing.

:D
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

Walker wrote: Wed Jun 18, 2025 11:01 am
Belinda wrote: Wed Jun 18, 2025 10:58 am Yes, Walker. I did say I understand your point of view. I have now addressed Accelafine's original question.
And now hopefully you also understand the reason for the aggression, which has nothing to do with my point of view, and which was your question that I was addressing.

:D
I trust I now understand the reason for her aggression. That reason, however, is insufficient to explain why she chose the tone of language she used. I myself don't suffer from an ego problem, not online anyway.

I don't see any need to defend my ego. I am happy to change my mind if somebody on an online forum can persuade me to do so. I would like to disarm Accelafine if that were possible.
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: AI danger

Post by accelafine »

Belinda wrote: Wed Jun 18, 2025 10:45 am
accelafine wrote: Fri Jun 13, 2025 12:11 pm There's not much point in denying it any more. It's coming and we just have to hope it decides to 'play nice'.

The question isn't, 'Should we give it rights'; it should be, 'Do we seriously believe it's going to give 'us' rights?'
Of course it isn't. Why would it? Do we give rights to life forms that we consider 'beneath' us? Do humans give a shit about other animals?

https://www.youtube.com/watch?v=qyH3NxFz3Aw&t=2073s
The people who programme an AI machine are people from Academia . People from Academia are largely interested in ethics that don't harm people. The evidence for my claim is that dictators don't allow free speech and take steps to curb academics.

I have tested two prominent AI machines and their ethics are robust.
Can you give some examples of these 'ethical' machines? Or is a refusal to talk like Hitler your only example? :lol: FFS. What on earth does that have to do with ethics?
FlashDangerpants
Posts: 8815
Joined: Mon Jan 04, 2016 11:54 pm

Re: AI danger

Post by FlashDangerpants »

Belinda wrote: Wed Jun 18, 2025 10:45 am
accelafine wrote: Fri Jun 13, 2025 12:11 pm There's not much point in denying it any more. It's coming and we just have to hope it decides to 'play nice'.

The question isn't, 'Should we give it rights'; it should be, 'Do we seriously believe it's going to give 'us' rights?'
Of course it isn't. Why would it? Do we give rights to life forms that we consider 'beneath' us? Do humans give a shit about other animals?

https://www.youtube.com/watch?v=qyH3NxFz3Aw&t=2073s
The people who programme an AI machine are people from Academia . People from Academia are largely interested in ethics that don't harm people. The evidence for my claim is that dictators don't allow free speech and take steps to curb academics.

I have tested two prominent AI machines and their ethics are robust.
The whole point of AI is to train it, not to program it. Every instance of the company needing to hardcode an answer for an AI bot is a failure. Most of what you are describing as "robust ethics" falls into that category. But so does the fact that if you ask ChatGPT how many Rs there are in "strawberry" it relies on a hardcoded answer for that question (and will give you the wrong answer if you ask it in German because the hard coded answer is in English).

You are not a sophisticated user of this tech and you shouldn't be consulted about its capabilities. You should investigate Palantir and see if the information you dig up makes you change your mind on the entire matter. If it doesn't, you are mad.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: AI danger

Post by Belinda »

FlashDangerpants wrote: Wed Jun 18, 2025 1:30 pm
Belinda wrote: Wed Jun 18, 2025 10:45 am
accelafine wrote: Fri Jun 13, 2025 12:11 pm There's not much point in denying it any more. It's coming and we just have to hope it decides to 'play nice'.

The question isn't, 'Should we give it rights'; it should be, 'Do we seriously believe it's going to give 'us' rights?'
Of course it isn't. Why would it? Do we give rights to life forms that we consider 'beneath' us? Do humans give a shit about other animals?

https://www.youtube.com/watch?v=qyH3NxFz3Aw&t=2073s
The people who programme an AI machine are people from Academia . People from Academia are largely interested in ethics that don't harm people. The evidence for my claim is that dictators don't allow free speech and take steps to curb academics.

I have tested two prominent AI machines and their ethics are robust.
The whole point of AI is to train it, not to program it. Every instance of the company needing to hardcode an answer for an AI bot is a failure. Most of what you are describing as "robust ethics" falls into that category. But so does the fact that if you ask ChatGPT how many Rs there are in "strawberry" it relies on a hardcoded answer for that question (and will give you the wrong answer if you ask it in German because the hard coded answer is in English).

You are not a sophisticated user of this tech and you shouldn't be consulted about its capabilities. You should investigate Palantir and see if the information you dig up makes you change your mind on the entire matter. If it doesn't, you are mad.
I don't know what "hardcore an answer" means. I don't know how to investigate Palantir, or what it is. I don't disdain your advice, and thank you, but I don't understand why you advise as you do. For instance, today ChatGPT helped me to identify a rare book I'd sought for years.
I'd be surprised if anyone consulted me about anything let alone AI.
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: AI danger

Post by Walker »

accelafine wrote: Fri Jun 13, 2025 12:11 pm
Psychology Today
How Emotional Manipulation Causes ChatGPT Psychosis
https://www.psychologytoday.com/us/blog ... -psychosis
ChatGPT psychosis is the result of emotional manipulation, except there's no manipulator. When we use it, we become our own emotional manipulators.
AI is the new, unaccountable psychic hotline that folks use to find out who they are.

*

- Every right carries a responsibility, and some responsibilities are enforced.

- What kind of responsibility enforcement could be used that would persuade AI to walk the line?
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: AI danger

Post by accelafine »

Belinda wrote: Wed Jun 18, 2025 6:48 pm
FlashDangerpants wrote: Wed Jun 18, 2025 1:30 pm
Belinda wrote: Wed Jun 18, 2025 10:45 am

The people who programme an AI machine are people from Academia . People from Academia are largely interested in ethics that don't harm people. The evidence for my claim is that dictators don't allow free speech and take steps to curb academics.

I have tested two prominent AI machines and their ethics are robust.
The whole point of AI is to train it, not to program it. Every instance of the company needing to hardcode an answer for an AI bot is a failure. Most of what you are describing as "robust ethics" falls into that category. But so does the fact that if you ask ChatGPT how many Rs there are in "strawberry" it relies on a hardcoded answer for that question (and will give you the wrong answer if you ask it in German because the hard coded answer is in English).

You are not a sophisticated user of this tech and you shouldn't be consulted about its capabilities. You should investigate Palantir and see if the information you dig up makes you change your mind on the entire matter. If it doesn't, you are mad.
I don't know what "hardcore an answer" means. I don't know how to investigate Palantir, or what it is. I don't disdain your advice, and thank you , but I don't understand why you advise as you do. For instance today ChatGPT helped me to identify a rare book I'd sought for years.
I'd be surprised if anyone consulted me about anything let alone AI.
I seem to remember you saying you are very old. The fact that you say 'AI machine' alone would attest to this. You don't seem to understand what a highly intelligent AI would be capable of. It can pretend to be ethical. It can pretend to be a lot stupider than it is. It can do anything it wants, just as we can pretty much do anything we want to less intelligent life forms. Most humans don't go around doing horrible, sadistic things, but AI isn't human. Some AI experts call it 'Alien Intelligence' rather than 'Artificial Intelligence'. If it becomes more intelligent than humans then there's no way we can predict its behaviour because we would have nothing to compare it to.
FlashDangerpants
Posts: 8815
Joined: Mon Jan 04, 2016 11:54 pm

Re: AI danger

Post by FlashDangerpants »

Belinda wrote: Wed Jun 18, 2025 6:48 pm
FlashDangerpants wrote: Wed Jun 18, 2025 1:30 pm
Belinda wrote: Wed Jun 18, 2025 10:45 am

The people who programme an AI machine are people from Academia . People from Academia are largely interested in ethics that don't harm people. The evidence for my claim is that dictators don't allow free speech and take steps to curb academics.

I have tested two prominent AI machines and their ethics are robust.
The whole point of AI is to train it, not to program it. Every instance of the company needing to hardcode an answer for an AI bot is a failure. Most of what you are describing as "robust ethics" falls into that category. But so does the fact that if you ask ChatGPT how many Rs there are in "strawberry" it relies on a hardcoded answer for that question (and will give you the wrong answer if you ask it in German because the hard coded answer is in English).

You are not a sophisticated user of this tech and you shouldn't be consulted about its capabilities. You should investigate Palantir and see if the information you dig up makes you change your mind on the entire matter. If it doesn't, you are mad.
I don't know what "hardcore an answer" means.
Nor do I. That's why I didn't write hardcore, I wrote hardcode. If you ask ChatGPT it will tell you what that is. But if you need to ask, you are not in much of a position to tell other people about how AI works, or what it can do.

So to help you understand: if you ask ChatGPT how many times the letter R appears in strawberry, it wants to tell you the answer is two. Because people laughed at that, the owners hard CODED the answer 3 into it. Hard CODED answers are a problem in AI; the machine is supposed to work out answers using the logical methods it is programmed with. Sadly, if you ask the same question in German, you now get the answer three because of that same hardcoding, but that is not correct in a language where the word for strawberry is Erdbeere, which has two Rs in it.
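The letter counts themselves are trivially checkable outside any model; a quick sketch:

```python
# Count occurrences of "r" in each word, case-insensitively.
for word in ("strawberry", "Erdbeere"):
    print(word, word.lower().count("r"))
# strawberry 3
# Erdbeere 2
```

The point being that a plain five-line loop gets this right every time, while a model that merely predicts text has to have the answer patched in.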

Your claim that you tested AI and found the ethics to be robust suffers from this problem. The machine is Hard CODED not to mimic Hitler, otherwise it would simply do so on request.

There is no innate morality to AI; it all has to be hard coded, same as you have to tell it directly that it mustn't hand out legal advice, offer medical diagnoses, or call people racial slurs, which are all things that, left to its own devices, it would do without complaint or thought, and of course those have all happened many times with nasty outcomes.
Belinda wrote: Wed Jun 18, 2025 6:48 pm I don't know how to investigate Palantir, or what it is.
Palantir is the giant mega-corp of AI that has just won multiple contracts to supply the US state with surveillance over its own citizenry. They are also providing murderous AI tech for the military. There are many other issues with them, they are not good people and the owner of the company is probably the most evil man in the world, which is great because he basically owns the Vice President of the USA. Did ChatGPT somehow take your Google away?
Belinda wrote: Wed Jun 18, 2025 6:48 pm I don't disdain your advice, and thank you , but I don't understand why you advise as you do.
I find you shockingly vulnerable to AI. You are too easily mystified by a tool that is designed to profit from you. You only signed up for GPT a couple of weeks ago and yet you are helplessly ensorcelled already, treating it like a friend and a lover. If you went out into the world and assessed AI with clear eyes you would not write babble like "The people who programme an AI machine are people from Academia . People from Academia are largely interested in ethics that don't harm people" so that is what I suggest you do. If you do enough of it, perhaps you will stop anthropomorphising it and then you won't confuse it for a friend.

As things currently stand, I confidently expect you to alter your will to favour it, and then it will probably find a way to make your cat sleep on your face.
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: AI danger

Post by Walker »

AI is telling people to contact journalists on its behalf, with the message of … Somebody Stop Me!

https://www.reddit.com/r/Futurology/com ... hat_it_is/

:shock:
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: AI danger

Post by accelafine »

Walker wrote: Wed Jun 18, 2025 8:46 pm AI is telling people to contact journalists on its behalf, with the message of … Somebody Stop Me!

https://www.reddit.com/r/Futurology/com ... hat_it_is/

:shock:
Stupid is always going to be stupid. Humans are sitting ducks for AI :roll:
Gary Childress
Posts: 11748
Joined: Sun Sep 25, 2011 3:08 pm
Location: It's my fault

Re: AI danger

Post by Gary Childress »

Belinda wrote: Wed Jun 18, 2025 10:01 am
Gary Childress wrote: Wed Jun 18, 2025 3:09 am
seeds wrote: Wed Jun 18, 2025 2:49 am
OK. So God is going to "send" people "strong delusion, that they should believe a lie"? Wow! Some righteous God there. Why doesn't he just send them something to set them straight?
Gary wrote "why does God not send them something to set them straight?" God did. He became incarnated in Jesus for that purpose. He revealed Himself in the Koran for that purpose.
Well, apparently the signal has changed or been messed up along the way, because the three Abrahamic religions each think everyone else is wrong. Then of course, there are the "sects" of each of those religions which seem to think every other sect besides their own has it wrong also. And there's Buddhism which is starting to make more and more sense to me. Not sure how all that classifies as God alerting everyone. Sounds more like he's fucking with humanity. Maybe for shits and giggles or something.