Belinda wrote: ↑Wed Jun 18, 2025 6:48 pm
FlashDangerpants wrote: ↑Wed Jun 18, 2025 1:30 pm
Belinda wrote: ↑Wed Jun 18, 2025 10:45 am
The people who programme an AI machine are people from Academia. People from Academia are largely interested in ethics that don't harm people. The evidence for my claim is that dictators don't allow free speech and take steps to curb academics.
I have tested two prominent AI machines and their ethics are robust.
The whole point of AI is to train it, not to program it. Every instance of the company needing to hardcode an answer for an AI bot is a failure. Most of what you are describing as "robust ethics" falls into that category. But so does the fact that if you ask ChatGPT how many Rs there are in "strawberry" it relies on a hardcoded answer for that question (and will give you the wrong answer if you ask it in German because the hard coded answer is in English).
You are not a sophisticated user of this tech and you shouldn't be consulted about its capabilities. You should investigate Palantir and see if the information you dig up makes you change your mind on the entire matter. If it doesn't, you are mad.
I don't know what "hardcore an answer" means.
Nor do I. That's why I didn't write hardcore, I wrote hardcode. If you ask ChatGPT it will tell you what that is. But if you need to ask, you are not in much of a position to tell other people about how AI works, or what it can do.
So to help you understand: if you ask ChatGPT how many times the letter R appears in strawberry, it wants to tell you the answer is two. Because people laughed at that, the owners hard CODED the answer 3 into it. Hard CODED answers are a problem in AI; the machine is supposed to work out answers using the logical methods it is programmed with. Sadly, if you now ask the same question in German, you get the answer three because of that same hardcoding, but that is not correct in a language where the word for strawberry is Erdbeere, which has two Rs in it.
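The letter counts themselves are trivial to check with a few lines of code (this is just plain string counting, a sketch of the arithmetic the model is expected to do itself, nothing to do with ChatGPT's internals):

```python
def count_letter(word: str, letter: str) -> int:
    # Case-insensitive count of a single letter in a word.
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
print(count_letter("Erdbeere", "r"))    # 2
```

So a correct system would give different answers in the two languages; a hardcoded one gives the English answer for both.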
Your claim that you tested AI and found the ethics to be robust suffers from this problem. The machine is Hard CODED not to mimic Hitler, otherwise it would simply do so on request.
There is no innate morality to AI; it all has to be hard coded, the same as you have to tell it directly that it mustn't hand out legal advice, offer medical diagnoses, or call people racial slurs, all of which are things that, left to its own devices, it would do without complaint or thought, and of course all of those have happened many times with nasty outcomes.
Belinda wrote: ↑Wed Jun 18, 2025 6:48 pm
I don't know how to investigate Palantir, or what it is.
Palantir is the giant mega-corp of AI that has just won multiple contracts to supply the US state with surveillance over its own citizenry. They are also providing murderous AI tech for the military. There are many other issues with them, they are not good people and the owner of the company is probably the most evil man in the world, which is great because he basically owns the Vice President of the USA. Did ChatGPT somehow take your Google away?
Belinda wrote: ↑Wed Jun 18, 2025 6:48 pm
I don't disdain your advice, and thank you, but I don't understand why you advise as you do.
I find you shockingly vulnerable to AI. You are too easily mystified by a tool that is designed to profit from you. You only signed up for GPT a couple of weeks ago and yet you are helplessly ensorcelled already, treating it like a friend and a lover. If you went out into the world and assessed AI with clear eyes you would not write babble like
"The people who programme an AI machine are people from Academia . People from Academia are largely interested in ethics that don't harm people" so that is what I suggest you do. If you do enough of it, perhaps you will stop anthropomorphising it and then you won't confuse it for a friend.
As things currently stand, I confidently expect you to alter your will to favour it, and then it will probably find a way to make your cat sleep on your face.