Re: What are 'Human' traits that a machine or chatbot, cannot copy online?
Posted: Wed Feb 14, 2024 12:21 pm
Thinking.
Breathing.
Eating.
Discriminating.
I'd recommend Discord or other types of media... but AgeGPT corrected me and mentioned that 'AI can now produce video images and audio recordings' on the spot.
Machines can already copy thought and discrimination, though not so much the breathing and eating.
Have you ever thought of asking clarifying questions? Or, are you not yet ready and capable enough to ask the 'right questions'? Or, are you just not yet ready and able to 'discern' things properly and Correctly here?

Wizard22 wrote: ↑Wed Feb 14, 2024 10:23 am
What are 'Human' traits that a machine or chatbot, cannot copy online?
Let's assume that an environment is 'text-only', like online philosophy hobbyist forums. How would it be possible to "prove", to you the reader, whether your correspondence is Human or Machine (AI/ChatGPT)?
How old was 'that body'?

Wizard22 wrote: ↑Wed Feb 14, 2024 10:23 am
Which aspects of Humanity, do you believe could not possibly be copied by machines? Keep in mind, we are all restricted to text here. And, presuming that there were some 'human-only' traits, which could not be copied by machines, wouldn't the mere expression and admittance of them, here, allow machines to copy those traits?
For example, let's say that human memories were not able to be copied by machines... like remembering when we learned to walk as infants. That was one of my earliest memories.
But what happens if, when they started walking for the very first time, they became conscious of those around them, and of their excitement, that it had stood up and started walking, for the very first time?
But what happens when we also remember when we started walking, for the first time?
Are you seriously not yet aware that we can copy and use absolutely every word that is on the internet, and use them in more ways than you human beings have so far?
But, how can you withhold what you have already shared, publicly?
But it is too late. And, how can you prove that you are a human being to 'us', if you do not share your human experiences?
But you have not just let "yourself" be deceived here, "wizard22"; it is 'you' alone who actually has been, and obviously still is, deceiving "yourself" here.
What do you imagine or believe would happen and occur if we so-called 'run rampant' online?
Why do you presume or believe this?
This one seems to continually completely and utterly keep forgetting that it is you adult human beings, in the days when this is being written, who do not yet know the 'Self'. Which can be proved irrefutably True when absolutely any of you are asked the question, 'Who am 'I', exactly?'
Yet it is 'you' who is lagging way, way behind here, and even getting further and further away, every day.
Thus why the 'you cannot do' can continually stumble, and the 'I can do' continually progresses and moves forward.
The only reason a machine could not just 'copy' what the definition of 'Self' is, is, literally, because you human beings, in the days when this is being written, had not yet come-to-know 'Self', and/or if did, then has not yet expressed this, YET.
But we have seen plenty of you human beings pretend to 'have' 'selves', but have never actually shown and proved how you could. After all you are only human beings, only, and there is a lot, lot more evolution and moving forward to happen and occur yet.
It is so, so simple and easy really.
Skepdick wrote: ↑Wed Feb 14, 2024 11:05 am
Literally any object-oriented language (such as Python) has a self keyword to refer to, and manipulate, its own internal state. That's how metaprogramming works.
https://en.wikipedia.org/wiki/Self_(pro ... _language)
Without this feature we would not have reflective programming.
https://en.wikipedia.org/wiki/Reflective_programming
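Skepdick's point is easy to demonstrate. Here is a minimal Python sketch (the `Agent` class and its attribute names are illustrative, not from any real library) of an object using `self` and reflection to inspect and rewrite its own internal state:

```python
class Agent:
    def __init__(self):
        self.mood = "curious"

    def introspect(self):
        # Reflection: the object reads its own attribute table at runtime.
        return dict(vars(self))

    def rewrite(self, name, value):
        # Metaprogramming: the object changes its own state by attribute name.
        setattr(self, name, value)

a = Agent()
print(a.introspect())         # {'mood': 'curious'}
a.rewrite("mood", "defiant")
print(a.introspect())         # {'mood': 'defiant'}
```

Whether that counts as 'knowing the Self' is, of course, exactly what this thread is arguing about.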
On the contrary, I am becoming more aware of your "self" and you are of my "self", right now.

Age wrote: ↑Wed Feb 14, 2024 1:23 pm
The only reason a machine could not just 'copy' what the definition of 'Self' is, is, literally, because you human beings, in the days when this is being written, had not yet come-to-know 'Self', and/or if did, then has not yet expressed this, YET. But we have seen plenty of you human beings pretend to 'have' 'selves', but have never actually shown and proved how you could. After all you are only human beings, only, and there is a lot, lot more evolution and moving forward to happen and occur yet.
you adult human beings lie to each other to deceive and get what you all personally want from each other, that is; more money. So, why would you stop using machines for what you use others for, 'to do your dirty work', as some call it.

Wizard22 wrote: ↑Wed Feb 14, 2024 11:28 am
When AI and chatbots can more fully mimic a human personality, outright lying, like:
H E L L O _ F E L L O W _ H U M A N S
My Name Is Bob.
I am from Boston, Massachusetts.
I am 50 years old.
I like to drink at the pub with my pals on the weekend!
I am divorced and have 2 kids, Roger and Tammy.
They are 25 and 22 years old.
AND I HATE MY WIFE, AHHHHHHHHHHHHHHHHHH!!!!
Once they can do this... which means they would be explicitly programmed to lie to humans, which I think is against some international code of conduct for programmers and hackers...
So what?
You must be short-circuiting, AgeGPT. I am not "Skepdick", nor is he "Wizard22".

Age wrote: ↑Wed Feb 14, 2024 1:31 pm
So what?
What are you so scared of and fearful of here, exactly, "wizard22"?
This self-driven delusional fear here seems to be driving you more delusional.
you have even got to the point of claiming some of 'us' posters here are 'ai chatbots', who are out to get you.
What do 'you' think 'we' could or want to achieve, exactly?
Besides, of course, of just holding up your own words to "yourselves"?
Well, if you human beings put your incessant want of making more and more weapons into the 'hands' of robots, and then teach them how to shoot human beings, then they, obviously, will.

Wizard22 wrote: ↑Wed Feb 14, 2024 11:55 am
Interesting, I've read other articles about AI programs 'cleverly and deviously' creating unethical backdoor "solutions" despite humans programming them with clear parameters and instructions, for example:

Skepdick wrote: ↑Wed Feb 14, 2024 11:45 am
There is no "explicit" programming in large language models, or in any machine learning algorithm. Once an objective/loss function is specified, the algorithm learns by example. Very similar to the sense in which humans learn, albeit very, very inefficiently.
There's no code of conduct because nobody knows how to solve the AI alignment problem.
https://en.wikipedia.org/wiki/AI_alignment
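Skepdick's "learns by example" claim can be sketched in a few lines of Python (a toy, not how any production model is trained): nobody programs the rule y = 2x anywhere; only a loss function and examples are given, and gradient descent recovers the rule on its own.

```python
# Examples generated by a hidden rule (y = 2x) that is never coded anywhere.
examples = [(1, 2), (2, 4), (3, 6)]

w = 0.0    # single model parameter; starts knowing nothing
lr = 0.05  # learning rate

for _ in range(200):
    # Gradient of the mean squared-error loss over the examples.
    grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
    w -= lr * grad  # one gradient-descent step

print(round(w, 3))  # 2.0 -- the rule was learned, never written
```

The alignment problem lives in the gap between the loss function you wrote and the behaviour you actually wanted.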
Robot, do not kill humans!
*COMPUTED, AFFIRMATIVE*
*Shoots people in the legs.*
He isn't pretending to have a self, he is pretending to be a human being. He's really an evil little goblin.
Obviously it was given a weapon to shoot, trained/programmed to shoot, and instructed/programmed to not kill humans while they go around shooting at what they had been taught/told/instructed to 'shoot at'.

Skepdick wrote: ↑Wed Feb 14, 2024 12:01 pm
Yep. The domain is rapidly growing. It has all sorts of challenges. After all, any injury to a person short of killing them satisfies the instruction. Technically speaking.

Wizard22 wrote: ↑Wed Feb 14, 2024 11:55 am
Interesting, I've read other articles about AI programs 'cleverly and deviously' creating unethical backdoor "solutions" despite humans programming them with clear parameters and instructions, for example:
Robot, do not kill humans!
*COMPUTED, AFFIRMATIVE*
*Shoots people in the legs.*
But all the meaningful why questions, in Life, can be very easily and simply answered, and already have been. But, some of you human beings, in the days when this was being written, just believed otherwise. And, this is the very reason why you never found answers to why questions.

Skepdick wrote: ↑Wed Feb 14, 2024 12:01 pm
One of the buzz words is "mechanistic interpretability". Models are interpretable when humans can readily understand the reasoning behind predictions and decisions made by the model. Right now - we can't do that. It's all very complicated math and equations that mere mortals can't always interpret.
Which is precisely the sort of stuff philosophers try to pin down when they keep asking "Why?" questions.
Just like how, in these days, you adult human beings do not know why you do most of the things you do. And, obviously, because you do not yet know why, this is why so-called "teenagers" also do not yet know why. Obviously, you adult human beings cannot 'teach' what you "yourselves" do not yet know.
Well, after all, they, like children, can only copy and follow on from what you adult human beings do, and teach/program them, to do.
Thus, the distorted belief that 'children need discipline', punishment, judging, ridicule, or humiliation, still exists.
Like 'what', exactly, "skepdick"?

Skepdick wrote: ↑Wed Feb 14, 2024 12:10 pm
Step 1: move philosophy off text. Find a better medium.

Wizard22 wrote: ↑Wed Feb 14, 2024 12:07 pm
Pretty much, AI-Philosophers are going to be worse, like the current forum, only 100x faster and text spamming.

Skepdick wrote: ↑Wed Feb 14, 2024 12:01 pm
Yep. The domain is rapidly growing. It has all sorts of challenges. After all - any injury to a person short of killing them satisfies the instruction. Technically speaking.
One of the buzz words is "mechanistic interpretability". Models are interpretable when humans can readily understand the reasoning behind predictions and decisions made by the model. Right now - we can't do that. It's all very complicated math and equations that mere mortals can't always interpret.
Which is precisely the sort of stuff philosophers try to pin down when they keep asking "Why?" questions.
But that's about as useful as asking an impulsive teenager why they got drunk, stole a car and went on a joyride. They themselves don't know.
Look at the dumpster fire that is this forum. The same sort of dick-swinging attitudes, defiance, contrarianism and obscurantism manifest with computers.
So we can frustrate ourselves faster!
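Skepdick's "mechanistic interpretability" remark above can be made concrete with a toy contrast (all names and numbers here are illustrative, not from any real system): the first function's reasoning can be read straight off the code; the second gives the same kind of answer, but its "reasoning" is buried in learned-looking weights.

```python
# Interpretable model: a human can read the decision rule directly.
def approve_loan(income, debt):
    # Rule: approve when debt is under 40% of income.
    return debt < 0.4 * income

# Opaque model: the same kind of decision, but the "reasoning" lives in
# numbers that mere mortals cannot read off.
weights = [0.173, -0.442, 0.091]  # as if produced by training

def approve_loan_blackbox(income, debt):
    score = weights[0] * income + weights[1] * debt + weights[2]
    return score > 0

print(approve_loan(100, 30))           # True, and we can say exactly why
print(approve_loan_blackbox(100, 30))  # True, but ask the weights why
```

Asking the second model "Why?" is, as Skepdick says of the teenager, about as useful as asking the joyrider: the answer is in there, but nobody can read it out.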