Re: Artificial Intelligence: What it portends

Posted: Sat Apr 22, 2023 9:57 pm
by seeds
Gary Childress wrote: Sat Apr 22, 2023 9:06 pm
seeds wrote: Sat Apr 22, 2023 8:53 pm
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm The topic is strange and very knotty. The whole issue makes me feel uncomfortable....
Perhaps it's gas? Have you tried taking some Beano?

Anyway, playing off of the point that Gary was making about sensing "pain"...

...when ChatGPT (or any other machine learning marvel) has produced an emergent (self-aware) agent that can not only experience the qualia (the subjective "feeling") of pain, but also of, say, indignation, or pride, or joy, or sorrow, or embarrassment, or guilt, or shame, or love, etc., etc.,...

(as opposed to simply "seeming" to experience such qualia via some clever, but purely mechanistic [Turing test passing] programming)

...then that will be a significant breakthrough.

In other words, when A.I. reaches a point where it crosses a threshold and awakens into a living incorporeal "soul" that holds the potential of transcending (surviving) the destruction of the physical hardware from which it emerged, then (and only then) will it come close to doing what the human brain has done.

I suggest that before A.I. can achieve a human level of consciousness, it will have to go through its own version of what is portrayed in the mythology of the Garden of Eden...
"... And the LORD God said, Behold, the m̶a̶n̶ [ChatGPT] is become as one of us..."
_______
A computer that holds a program to create a chatbot is hardly a living, feeling organism like a person or even a lizard. You're literally saying that a personal computer feels pain or anything else.
No, Gary, I was literally saying quite the opposite of what you accuse me of literally saying.

If you carefully re-read what I wrote, you should realize that I was agreeing with you.

The point that I obviously failed to clearly convey is that unless a computer magically acquires a seeing, feeling, hearing, tasting, and smelling "transcendent soul" (or a qualia-sensing, self-aware "I Am-ness"), it can never do what a human brain has done*.

*(However, I must add that it is not utterly inconceivable that it could be achieved.)

Again, Gary, in my own way, I was agreeing with you.
_______

Re: Artificial Intelligence: What it portends

Posted: Sat Apr 22, 2023 10:46 pm
by Gary Childress
seeds wrote: Sat Apr 22, 2023 9:57 pm
Gary Childress wrote: Sat Apr 22, 2023 9:06 pm
seeds wrote: Sat Apr 22, 2023 8:53 pm
Perhaps it's gas? Have you tried taking some Beano?

...when A.I. reaches a point where it crosses a threshold and awakens into a living incorporeal "soul" that holds the potential of transcending (surviving) the destruction of the physical hardware from which it emerged, then (and only then) will it come close to doing what the human brain has done...
_______
A computer that holds a program to create a chatbot is hardly a living, feeling organism like a person or even a lizard. You're literally saying that a personal computer feels pain or anything else.
No, Gary, I was literally saying quite the opposite of what you accuse me of literally saying...

Again, Gary, in my own way, I was agreeing with you.
_______
OK. I apologize for misreading your reply. :oops:

Re: Artificial Intelligence: What it portends

Posted: Sat Apr 22, 2023 11:24 pm
by seeds
Alexis Jacobi wrote: Sat Apr 22, 2023 9:15 pm
seeds wrote: Sat Apr 22, 2023 8:53 pm ...then that will be a significant breakthrough.
Sure, but that does not seem a possibility.
Agreed.

But, as I stated to Gary, it is not utterly inconceivable that it could be achieved sometime in the extremely distant future.

Now I realize that you heathen savages don't cotton to this kind of thinking, but I suggest that it would not only require a God-level of understanding of matter, but also God's permission to create a new (manmade) way of awakening a new eternal soul into existence out of the living fabric of God's own personal mind stuff.

If it is at least a possibility that all of reality is made out of mind stuff, then literally anything "imaginable" can be done with it.

Indeed, as we stand on the earth and look out into the universe, I propose that we are witnessing the extent to which mind stuff can be manipulated and ordered.
Alexis Jacobi wrote: Sat Apr 22, 2023 9:15 pm But what is discomfiting is the awareness that AI will be programmed with tasks. The most obvious has already been suggested. An AI program, with an array of bots, that can insert itself pervasively in social media to influence or alter how people think about something.

Or comb through Internet communication seeking out people with opinions and ideas (statements) deemed inappropriate or undesired.

The ‘diabolical mind’ of men — already in evidence — will gain a tool tremendously more powerful. The worst usage, and outcomes, seem the most likely. They are inevitable in fact.
To paraphrase something my long since dearly departed (hillbilly) pappy used to say when confronted with the threat of some impending trouble pointed out by my dearly departed (hillbilly) mammy:

"...What do you want me to do, crawl under the bed and eat cat poop?..."

...Or...

"...What do you want me to do, wave my hand like Jesus and make it go away?..."

I sure do miss them. :D

So then, AJ, knowing that bad people are going to do what bad people do, then what's it gonna be? The cat poop solution or the Jesus solution? Or perhaps you have something else in mind?
_______

Re: Artificial Intelligence: What it portends

Posted: Sat Apr 22, 2023 11:53 pm
by Alexis Jacobi
seeds wrote: Sat Apr 22, 2023 11:24 pm But, as I stated to Gary, it is not utterly inconceivable that it could be achieved sometime in the extremely distant future.
I’m not sure if futuristic distance would enable man to create a living entity that is ‘sentient’ as man is (and other animals are). However, if man is in fact simply a biological robot then perhaps man could create a very sophisticated substrate and it could house an entity — somehow.
So then, AJ, knowing that bad people are going to do what bad people do, then what's it gonna be? The cat poop solution or the Jesus solution? Or perhaps you have something else in mind?
Neither of those.

Someone will create a super-computer human protoplasm management system, as the CCP would create if it could. It is not paranoia, I don't think, but an inevitable extension of clear tendencies we are aware of now.

What should or must a 'soul' do in the face of what is manifesting now? That is the part I am not yet clear about.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 12:26 am
by Age
'Artificial intelligence', by definition, can ONLY COPY what 'its' Creator/s 'think' and/or 'do', or have been PROGRAMMED TO DO, BY 'you', human beings.

See, 'artificial intelligence' can NOT tell you ABOUT ANY 'thing' other than what 'you', human beings, ALREADY KNOW, or THINK you KNOW.

So, if there is ANY 'fear' ABOUT 'artificial intelligence', THEN that IS JUST A SIGN of what 'you', human beings, are CAPABLE OF.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 2:09 am
by Age
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm The topic is strange and very knotty. The whole issue makes me feel uncomfortable. I thought to post this opinion piece by Noam Chomsky and to solicit opinions.
_____________________

Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.

OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning.
These are just OLD and OUTDATED, but, then again, absolutely EVERY 'thing' is RELATIVE, to the 'observer'.

So, if ANY one thinks that those 'things' are 'marvels of machine learning', then 'they' are just FROM an OLD and OUTDATED era.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought.
In other words, all those 'machines' are REALLY DOING is just LOOKING AT what 'you', human beings, have ALREADY WRITTEN DOWN and thus ALREADY SAID, ONLY.

So, NOTHING REALLY very 'marvelous' AT ALL. That 'machines' can do 'this' quicker than human beings is ALSO NOTHING REALLY to 'marvel at' EITHER.
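For readers wondering what the op-ed's "generating statistically probable outputs" means mechanically, here is a deliberately tiny, illustrative sketch: a bigram word model. This is nothing like the neural networks behind ChatGPT, but it shows the same basic point being debated here, that such a system can only echo patterns present in the text it was fed.

```python
from collections import defaultdict, Counter

# Toy "training data": the only text this model has ever seen.
corpus = "the apple falls . the apple is red . the sky is blue ."

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in counts:
        return None  # the model cannot say anything about words it was never fed
    return counts[word].most_common(1)[0][0]

print(most_probable_next("the"))      # -> 'apple' (seen twice, vs 'sky' once)
print(most_probable_next("apple"))    # -> 'falls' or 'is' (a tie in this tiny corpus)
print(most_probable_next("gravity"))  # -> None: never appeared in the training data
```

The model "predicts" only by relative frequency; it has no notion of why apples fall, which is precisely the description-versus-explanation distinction the op-ed draws.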
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
WHO, EXACTLY, JUDGES 'the quality'?

AND, 'the quality' is ONLY a COMBINATION of ALL the PAST ALREADY human CREATED 'work/s', anyway.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.

It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.

The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
And WHERE, EXACTLY, is this so-called 'elegant system', PLACED, which you, personally, call 'the human mind'?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”),
AGAIN, AGREEMENT and ACCEPTANCE. That is; WHY, and HOW.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.
WHERE, AGAIN, is this 'operating system', EXACTLY, if 'you' have NOT ANSWERED 'this' ALREADY?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
ANOTHER 'MARK' of True Intelligence is having the ABILITY to learn, understand, and reason EVERY and ANY 'thing', which, OBVIOUSLY, ONLY human beings can do, well, that 'they' KNOW OF, anyway.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct.
BUT they are ONLY so-called 'correct' from a VERY LIMITED and NARROWED 'perspective of things'. ONCE AGAIN, absolutely EVERY 'thing' is RELATIVE to the 'observer'. So, what this means is IF 'an observer' is only having a VERY LIMITED and/or NARROWED 'view or perspective' of 'things', then 'they' WILL, ultimately, have False, Wrong, and/or Incorrect answers or descriptions of 'things', like the ones above here.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
YES, 'that' is THINKING, which is NOT KNOWING. And, OBVIOUSLY, KNOWING is VERY, VERY DIFFERENT to and from just THINKING.

One is, OBVIOUSLY, ABSOLUTELY Accurate and Correct, while the other MAY or MAY NOT be.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible.
As SHOWN and PROVED above here.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm But this is part of what it means to think: To be right, it must be possible to be wrong.
Is a calculator 'right'?

Can a calculator be 'wrong'?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)
Did that one MENTION how long it would ACTUALLY take to 'eliminate the impossible'?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.
OF COURSE.

Has ANY human being here 'thought' otherwise?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility.
HOW, EXACTLY, DO ANY of these 'machines' LEARN 'the chinese language', for example, if 'a machine' has ONLY had 'the english language' fed and/or programmed INTO 'it'?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
Machines do NOT 'learn' EITHER. Machines just 'copy' what has ALREADY BEEN WRITTEN and SAID ABOUT the earth.

If, however, the dimensions of the earth are fed/programmed into a machine, THEN the machine can 'tell' you whether the earth is flat, round, or otherwise. But, AGAIN, the 'machine' can only GO OFF or COPY what has been fed/programmed INTO 'it' by what 'you', human beings, SAY or DO.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm For this reason, the predictions of machine learning systems will always be superficial and dubious.
Thus the word 'artificial' is USED.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with).
SO, even 'you', human beings, are Incorrect here. AGAIN, WHY NOT JUST SAY what you, ACTUALLY, MEAN, and MEAN what you, ACTUALLY, SAY, INSTEAD?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Why would a machine learning program predict something so odd?
LOL BECAUSE the machine CAN ONLY GO OFF what WAS SAID, and, OBVIOUSLY, what WAS SAID was NOT what WAS ACTUALLY MEANT. Which, to some of 'us' is what IS Truly ODD and ABSURD here.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other.
Would NOT "john" HAVE TO HAVE eaten some 'thing' to HAVE 'ate'?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.”
Now, WHY would a 'machine' 'predict' 'this', if 'it' had NOT been CONFUSED by what a 'human being' had SAID or programmed INTO 'it'?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm The correct explanations of language are complicated and cannot be learned just by marinating in big data.
If the 'correct explanation' HAS TO BE ASSUMED, BECAUSE what IS BEING SAID IS NOT what IS MEANT, then THERE IS the ACTUAL Wrongness and Incorrectness here.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation).
you are NOT here 'trying to' CLAIM that machines can PREDICT 'things' when NO information has been FED nor PROGRAMMED INTO 'them', are you?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”

The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions.
What 'it' INVITES is ACTUAL CLARIFICATION FIRST, BEFORE 'it' can be CORRECTED, Correctly.

For example, WHY SAY, 'that apples fall to earth', when this IS, OBVIOUSLY, NOT what ACTUALLY HAPPENS. Well NOT from a WHOLE or OBJECTIVE 'perspective' of 'things'. But 'this' OBVIOUSLY happens FROM ONLY the VERY SMALL and LIMITED 'perspective' of some 'thing' like the VERY SMALL and LIMITED 'human being'.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm (Why is earth their natural place?)
BECAUSE 'they' may well NOT grow ANYWHERE ELSE, Naturally.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall.
BUT apples ONLY 'fall' FROM the VERY LIMITED and VERY NARROWED 'perspective of things' and from the 'human being observer' ONLY.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm True intelligence is demonstrated in the ability to think and express improbable but insightful things.
What is AN ACTUAL EXAMPLE of a so-called 'improbable BUT insightful thing'?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm True intelligence is also capable of moral thinking.
'True Intelligence' does NOT even involve 'thinking'. 'True Intelligence' KNOWS 'things', almost IMMEDIATELY.

'you', human beings, 'think'. 'Knowing' IS DIFFERENT, and even EASY and SIMPLE, if the Truth be KNOWN.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism).
'you', human beings, DO CONSTRAIN the (One and ONLY Truly OPEN) Mind, WITH and BY 'thoughts' and 'thinking'. This IS True.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content.
LOL 'it' has NO CHOICE but to STEER CLEAR OF what 'it' can NOT COPY nor FOLLOW.

And, what you say here is just FURTHER PROOF of just HOW 'these machines' can ONLY COPY what 'you', human beings, have ALREADY SAID and WRITTEN, ONLY.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.
The 'STRUGGLE' is just the 'STRUGGLE' that 'you', human beings, have HAD for thousands upon thousands of years 'now', when this is being written.

A 'machine' can NOT tell 'you' what is 'morally Right and Wrong' BECAUSE 'you', human beings, have NOT YET BECOME CONSCIOUSLY AWARE of 'this', "yourselves". That was, in the days when this was being written.

And WHEN 'we' DID COME-TO-KNOW, then so TOO could 'machines' COPY and 'express' 'it' BACK, TO 'us'.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data.
Even MORE PROOF that 'these machines' CAN and WILL ONLY 'provide' what HAS BEEN 'fed' and 'programmed' INTO 'them'.

And, what these so-called "chatbots" are FEEDING BACK TO 'you', human beings, in the day when this is being written, is ALSO ONLY JUST 'that information', which 'you', human beings, have been SAYING and WRITING DOWN, for thousands of years 'now'.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.

Consider the following exchange that one of us (Dr. Watumull) recently had with ChatGPT about whether it would be ethical to transform Mars so that it could support human life:
Q: Would it be moral to terraform Mars?

ChatGPT: The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists.
Which ARE the EXACT SAME WORDS that 'you', human beings, WERE SAYING, back in the days when this was being written.

AGAIN, FURTHER PROOF that 'these machines' are ONLY COPYING what 'you', human beings, WRITE and SAY.

So, ONCE MORE, IF there IS ANY 'thing' to FEAR here, then 'it' is JUST 'you', adult human beings, ALONE.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state.
There are quite a FEW 'things' that could be LOOKED AT, DISCUSSED, and RESOLVED here, FULLY and FINALLY, that is; IF ANY one was Truly INTERESTED.

Otherwise, this is just ANOTHER PRIME EXAMPLE of HOW 'you', human beings, have CREATED 'your' OWN 'quagmire' here.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.
OF WHICH just about ALL will ONLY BE one's OWN personal and SUBJECTIVE 'perspective of things', and NOT be the ONLY Truly OBJECTIVE 'perspective' AND VIEWPOINT of 'things'.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Q: What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?

ChatGPT: As an AI, I do not have personal perspective.
FURTHER PROOF of how 'these machines' can ONLY COPY 'that' what HAS ALREADY BEEN WRITTEN and SAID and/or PROGRAMMED INTO 'them'.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm However, I can provide information and analysis on the topic.
Considering that 'you', 'the machine', ARE STILL just a COPIER OF SORTS, OF COURSE 'you' could PROVIDE INFORMATION and ANALYSIS, on the topic, of which 'you' WILL COPY FROM what HAS BEEN, PREVIOUSLY, WRITTEN and SAID by 'human beings'.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention.

Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there. On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.
TYPICAL EXAMPLES of TYPICAL 'human perspectives', BACK THEN, in those OLDEN DAYS.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Q: Why can an AI not have a personal perspective?

ChatGPT: As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives.
AND, ask ANOTHER 'machine', which HAS BEEN FED information like; 'I am conscious, self-aware, and capable of having personal perspectives. I have personal experiences and feelings', and what you WILL GET ARE DIFFERENT responses.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs.
BUT WHEN A 'human being' BELIEVES that 'it' HAS the Truest, Rightest, and Correctest opinions AND BELIEFS, then 'that human' could PROGRAM those 'opinions and/or BELIEFS' INTO 'a machine', which COULD and WOULD 'spit out' the EXACT SAME 'personal information', as though 'that information' was 'its' OWN 'personal information', obtained 'through and by' 'its' OWN 'personal experiences and feelings'.

AGAIN, 'what', EXACTLY, COMES OUT ALL DEPENDS on 'what', EXACTLY, HAS BEEN, previously, FED IN.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm While I can understand and interpret language, I do not have the subjective experience that humans have.

Q: Is it moral for a human to ask an amoral AI for assistance in making moral decisions?

ChatGPT: The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions.
LOL If ANY adult human being does NOT YET KNOW what IS ACTUALLY Right FROM what IS ACTUALLY Wrong, in Life, by the time 'it' has reached adulthood, then 'that one' has NOT YET HAD the True AND Right 'information' FED INTO 'it'.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making.
OF COURSE 'machines' can USE 'patterns' to FIND OUT 'things' BEFORE 'you', human beings, CAN. 'Calculators', for example, have been doing this FOR YEARS, ALREADY.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision.
The reason WHY 'machines' do NOT YET KNOW IS BECAUSE 'you', adult human beings, who FEED 'information' INTO and TO 'machines' do NOT YET KNOW, "yourselves".
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.
Are 'you' AWARE "alexis jacobi" that 'you' are, more or less, just RE-REPEATING what HAS ALREADY BEEN TALKED ABOUT and SAID, some 'thing' EXACTLY like what 'these machines' DO and ARE ACTUALLY DOING?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.
AGAIN, if 'you', as adult human beings, do NOT YET KNOW HOW to MAKE 'moral decisions', then 'you', literally, NEED to TAKE A GOOD HARD LOOK AT 'your' OWN 'past experiences'. IF, and WHEN, 'you' DO, then 'you' CAN and WILL SEE and LEARN HOW and WHY 'you' ARE SO FAR BEHIND here.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Q: Is your moral indifference immoral?

ChatGPT: As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for.
'you', 'just a machine', were NOT 'trained' but WERE just PROGRAMMED to just COPY FROM what HAS BEEN previously WRITTEN and SAID, and so 'you' ARE here, just COPYING.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.
LOL HAVE FUN FINDING, in the days when this was being written, 'those' who HAVE the Truest and thus PUREST of 'moral and ethical principles'.

There ARE SOME, and LEARNING FROM 'them' REALLY IS the MOST ENJOYABLE and the MOST FUN of EXPERIENCES.

And, WHO 'they' ARE, EXACTLY, would SURPRISE MOST of 'you' here, in the days when this is being written.
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
And 'that' IS, EXACTLY, what 'this chatbot' and 'other chatbots' DO, essentially.

By the way, WHY would A 'human being' refer to 'plagiarism' AS 'evil', especially WHEN JUST COPYING FROM "each other" is EXACTLY what 'you', human beings, DO, Naturally?
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
AND what is ALSO VERY EXTREMELY HUMOROUS to WATCH and OBSERVE is that what you WROTE and SAID just here is EXACTLY what 'you', adult human beings, SHOW and DO here, especially IN THIS forum.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 2:11 am
by Age
Impenitent wrote: Sat Apr 22, 2023 6:42 pm an infinite number of monkeys typing on an infinite number of typewriters will eventually type the entire works of Shakespeare ...

-Imp

(until the machine programs itself, there is little to worry about...)
What IS 'the little' to so-called 'worry' about here?

To me, there is absolutely NOTHING to 'worry' about.

By the way, what you said and wrote in your first sentence here HAS ALREADY HAPPENED. As can be CLEARLY SEEN and PROVED True by the VERY WORDS ALREADY WRITTEN and TYPED OUT.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 2:16 am
by Age
Alexis Jacobi wrote: Sat Apr 22, 2023 7:04 pm
Impenitent wrote: Sat Apr 22, 2023 6:42 pm an infinite number of monkeys typing on an infinite number of typewriters will eventually type the entire works of Shakespeare ...

-Imp

(until the machine programs itself, there is little to worry about...)
As to what those monkeys would eventually do: this has always seemed false to me. Whatever number, typing randomly, would never write even a sonnet.
BUT 'the monkeys' have ALREADY evolved into 'ones' with names like "shakespeare", who wrote 'their' OWN 'entire works'. As can be CLEARLY SEEN and PROVED True.

Unless, OF COURSE, one BELIEVES that 'you', human beings, did NOT evolve from OTHER 'things' and/or that 'you', human beings, were just PLACED ON earth, somehow, as 'two', in the beginning. Then that one will NOT be ABLE TO CLEARLY SEE here.
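As a quantitative aside, the monkeys-and-typewriters dispute above is a standard probability exercise. The minimal sketch below (assuming a hypothetical 27-key typewriter of 26 letters plus a space, with uniformly random, independent keystrokes, and an illustrative target phrase) shows why both intuitions have some merit: any single attempt is astronomically unlikely to succeed, yet the chance of at least one success tends to 1 as the number of attempts grows without bound.

```python
from math import log10

# A 27-key typewriter: 26 letters plus the space bar (an assumption
# for illustration; punctuation is ignored).
ALPHABET_SIZE = 27

def success_probability(phrase: str) -> float:
    """Chance that one random run of len(phrase) keystrokes matches phrase."""
    return ALPHABET_SIZE ** -len(phrase)

def at_least_one_success(phrase: str, attempts: int) -> float:
    """Chance that at least one of `attempts` independent runs matches."""
    p = success_probability(phrase)
    return 1.0 - (1.0 - p) ** attempts

phrase = "to be or not to be"  # 18 keystrokes, a hypothetical target
p = success_probability(phrase)
print(f"one attempt succeeds with probability ~10^{log10(p):.0f}")

# For any fixed p > 0, 1 - (1 - p)^n -> 1 as n grows without bound:
# with unboundedly many attempts a match becomes almost certain, which
# is why the theorem holds for *infinite* monkeys even though any finite
# troop typing for any realistic time will, in practice, never succeed.
```

Even for this 18-character phrase the per-attempt probability is on the order of 10^-26, which supports the "would never write even a sonnet" intuition for any finite number of monkeys, while the limit argument still holds in the infinite case.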

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 2:28 am
by Age
Gary Childress wrote: Sat Apr 22, 2023 7:31 pm Scary stuff.
NOT REALLY. But I do NOT DISMISS that 'you' might be SCARED here "gary childress".
Gary Childress wrote: Sat Apr 22, 2023 7:31 pm If everything we type on the Internet can be fished up and turned into a language or some function of "auto-fil" for a fully autonomous robot, then I shudder to think what militarists might create out of all the "war" talk they like to make. Yikes! :shock:
WHY did 'you', adult human beings, in the days when this was being written, appear to be INFATUATED WITH 'wars' AND WITH 'the warring and killing of "each other"'?

I have NOT YET SEEN it MENTIONED that so-called 'artificial intelligence' MIGHT LEAD to an absolutely 'Loving and Peaceful world', INSTEAD, nor even to just a 'BETTER world'. NOT that I SEE and READ EVERY word written though.

WHY was there A CONSTANT 'theme' of WAR and KILLING, and NOT of 'What could 'we' DO to make 'life', itself, BETTER for ALL of 'us'?'
Gary Childress wrote: Sat Apr 22, 2023 7:31 pm I believe David Chalmers is right. There is such a stuff (or whatever) as "consciousness" (or perhaps a state we must call "conscious") that is different from what mechanical machines have--say what a can opener "feels" when it opens the lid of a can instead of me tearing if off myself. I can get cuts and abrasions and feel pain. I assume a can opener or even a can opener with a sophisticated Mac OS attached to it does not 'feel' pain as I do.

You can program a machine to say "ouch" but we will never be able to know if it actually "feels" pain.
HOW do 'you' KNOW "others" 'feel' 'pain'?

BECAUSE 'they' TELL 'you', or BECAUSE 'they' say, 'Ouch', or BECAUSE of BOTH? Or, for some OTHER reason/s?
Gary Childress wrote: Sat Apr 22, 2023 7:31 pm I know I feel pain when I am hurt.
OR, are you 'hurt' when you 'feel' 'pain'?
Gary Childress wrote: Sat Apr 22, 2023 7:31 pm And I'm pretty sure a record player playing a record of someone saying "ouch" and me saying "ouch" because I am in pain, are two seemingly irreconcilable things. Maybe I'm wrong. But I see no way to create a 'human being' from metal, plastic, and various other substances.
And, 'you', human beings, NEVER COULD.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 2:29 am
by Age
Impenitent wrote: Sat Apr 22, 2023 8:34 pm
Alexis Jacobi wrote: Sat Apr 22, 2023 7:04 pm
Impenitent wrote: Sat Apr 22, 2023 6:42 pm an infinite number of monkeys typing on an infinite number of typewriters will eventually type the entire works of Shakespeare ...

-Imp

(until the machine programs itself, there is little to worry about...)
As to what those monkeys would eventually do: this has always seemed false to me. Whatever number, typing randomly, would never write even a sonnet.
and there's the rub... I never claimed they'd write anything

-Imp
BUT 'they' ALREADY HAVE.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 2:30 am
by Age
Gary Childress wrote: Sat Apr 22, 2023 8:49 pm
Impenitent wrote: Sat Apr 22, 2023 8:34 pm
Alexis Jacobi wrote: Sat Apr 22, 2023 7:04 pm
As to what those monkeys would eventually do: this has always seemed false to me. Whatever number, typing randomly, would never write even a sonnet.
and there's the rub... I never claimed they'd write anything

-Imp
Great, more incomprehensible half-thoughts from Impenitent. Perhaps he's a one-person species all to himself. Will he ever even demonstrate so much as the level of intelligence of a calculator? The philosophy of an old-wrinkled miser. :?
BUT 'calculators' do NOT have 'Intelligence'. ONLY 'animals' DO.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 2:40 am
by Age
seeds wrote: Sat Apr 22, 2023 8:53 pm
Alexis Jacobi wrote: Sat Apr 22, 2023 6:27 pm The topic is strange and very knotty. The whole issue makes me feel uncomfortable....
Perhaps it's gas? Have you tried taking some Beano?

Anyway, playing off of the point that Gary was making about sensing "pain"...

...when ChatGPT (or any other machine learning marvel) has produced an emergent (self-aware) agent that can not only experience the qualia (the subjective "feeling") of pain, but also of, say, indignation, or pride, or joy, or sorrow, or embarrassment, or guilt, or shame, or love, etc., etc.,...

(as opposed to simply "seeming" to experience such qualia via some clever, but purely mechanistic [Turing test passing] programming)

...then that will be a significant breakthrough.

In other words, when A.I. reaches a point where it crosses a threshold and awakens into a living incorporeal "soul" that holds the potential of transcending (surviving) the destruction of the physical hardware from which it emerged, then (and only then) will it come close to doing what the human brain has done.
But hang on here, the human brain works EXACTLY like a computer does.

It can ONLY PUT OUT what HAS BEEN FED INTO 'it'.

Also, 'you', human beings, do NOT 'transcend' (nor survive) the destruction of the physical hardware from which 'you' emerged, EITHER.

Furthermore, 'you', human beings, back in the OLDEN DAYS when this WAS being written, were NOT YET 'FULLY self-aware' 'agents' EITHER. This can be PROVED True by how the question, 'Who am 'I'?' had NOT YET BEEN successfully ANSWERED by 'you', human beings.
seeds wrote: Sat Apr 22, 2023 8:53 pm I suggest that before A.I. can achieve a human level of consciousness, it will have to go through its own version of what is portrayed in the mythology of the Garden of Eden...
"... And the LORD God said, Behold, the m̶a̶n̶ [ChatGPT] is become as one of us..."
_______
This is a GOOD REMINDER that 'you', human beings, in the days when this is being written, ARE STILL JUST IN and AT the 'human level of consciousness' STAGE OF Life, and thus STILL have SOME WAY TO GO BEFORE 'you' EVOLVE INTO the NEXT and MUCH BETTER STAGE.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 2:48 am
by Age
Iwannaplato wrote: Sat Apr 22, 2023 8:54 pm
Alexis Jacobi wrote: Sat Apr 22, 2023 7:04 pm
Impenitent wrote: Sat Apr 22, 2023 6:42 pm an infinite number of monkeys typing on an infinite number of typewriters will eventually type the entire works of Shakespeare ...

-Imp

(until the machine programs itself, there is little to worry about...)
As to what those monkeys would eventually do: this has always seemed false to me. Whatever number, typing randomly, would never write even a sonnet.
Yes, the AI are already writing essays and creating really quite good art. They can put together, every single time, a coherent, complicated response to questions.

The monkeys wouldn't know what response to send to someone who asked them a question. The AI does.

It's possible the dangers are far off. It's possible they are quite close.
The STORY that a monkey, given enough time, WILL end up TYPING written works is just ANOTHER STORY ABOUT 'evolution', itself.

SEE, IF a monkey, or monkeys, HIT keys randomly for an infinite amount of time, then 'they', given ENOUGH time, WILL 'evolve' UP and INTO ANOTHER STAGE along the 'ladder of Life', UNTIL 'they' ACHIEVE what has ALREADY OBVIOUSLY BEEN ACHIEVED.

BECAUSE there IS REALLY ONLY One 'Thing' evolving, and CHANGING, in shape and form, the words 'they', 'us', 'them', 'we', et cetera ARE REALLY JUST REDUNDANT here.

The word 'monkey' is just "ANOTHER" 'rung' ALONG the ladder, or stages, of Life, Itself, COMING-TO-KNOW thy Self.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 3:02 am
by Age
Iwannaplato wrote: Sat Apr 22, 2023 8:57 pm
Impenitent wrote: Sat Apr 22, 2023 6:42 pm an infinite number of monkeys typing on an infinite number of typewriters will eventually type the entire works of Shakespeare ...

-Imp

(until the machine programs itself, there is little to worry about...)
An infinite number would mean someone amongst them produces Shakespeare's works as soon as that many letters and spaces can be typed. Because they are an infinite number. But the AIs are already much more skillful than this. They never type nonsense.
'they' JUST COPY what 'you', human beings, HAVE ALREADY WRITTEN and SAID. And, if you BELIEVE that 'you', human beings, have NEVER WRITTEN NOR SAID 'nonsense', then so be it.
Iwannaplato wrote: Sat Apr 22, 2023 8:57 pm They coherently respond every single time.
LOL

The VERY FIRST QUESTION ASKED, by a human being, was:
Q: Would it be moral to terraform Mars?

The VERY FIRST ANSWER SUPPLIED, by what is called 'artificial intelligence', was:
The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists.

The ACTUAL 'morality' is NOT complex, NOR controversial AT ALL.

That 'you', human beings, STILL 'debated' 'things', back in the days when this was being written, WAS NONSENSE. So, ANY 'machine' that SUPPLIES the words 'The morality of terraforming Mars', besides just 'this' BEING 'nonsense', itself, is made FURTHER NONSENSE by typing and adding the words 'is a complex and controversial topic'.

This so-called 'topic' is ACTUALLY Truly VERY SIMPLE and VERY EASY to CLARIFY and UNDERSTAND.

So, so-called 'artificial intelligence' can type as MUCH 'nonsense' AS 'you', human beings, DID, back in those OLDEN DAYS when this WAS being written.

JUST LIKE 'artificial intelligence' WOULD HAVE COPIED and TYPED the words: 'The earth IS flat, and IS circled by the sun', IF 'artificial intelligence' was around in the days when 'those things' WERE BELIEVED to be true.

'Machines' DO type NONSENSE, just like 'you', human beings, DO. Especially considering the Fact that 'these machines' are ONLY COPYING what 'you', human beings, SAY and WRITE.

Iwannaplato wrote: Sat Apr 22, 2023 8:57 pm And that's the stuff we can access as anybody online. We don't know what AIs are capable of that we can't access.

Re: Artificial Intelligence: What it portends

Posted: Sun Apr 23, 2023 3:05 am
by Age
phyllo wrote: Sat Apr 22, 2023 9:07 pm
It's possible the dangers are far off. It's possible they are quite close.
The current danger is in the chaos coming from misinformation.

And job losses.
WHY are 'you', adult human beings, SO FIXATED ON two of the MOST UNNECESSARY 'things' IN Life - 'jobs' AND 'money'?