...that's right, the Technological Singularity is upon us!!!!
AND - it took atto to point it out - the future could be poo followed by a question mark... as much as the past could be a question mark followed by poo
commonsense wrote: ↑Thu Jun 26, 2025 7:48 pm
If AI can already write its own programs, it’s already too late to regulate it
What's to stop you or me from inaugurating a public council for democratic control of AI corporations?
ChatGPT tells me it's a tool trained by and continually monitored by the humans that own it. It does not write its own programmes.
Silicon Valley is aware of its own corporate tradition. The humans there are not stupid, and they are also aware of the necessity for constantly reviewed ethics.
Some corporations are less ethical than others which is why a democratic public council is our next step.
Some techies purport that machine learning is an instance of AI writing its own software. If so, what is to prevent AI from overwriting the instructions of the council?
Artificial Intelligence machines lack desires and ego selves, so there is no such thing as an AI machine's "own". What could happen is that bad humans get control of the AI machine.
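Worth separating the two senses of "writing its own software" here. In machine learning as it actually exists, training adjusts numeric parameters inside a fixed program; it does not rewrite the program, and any rule coded outside the model (say, a council's constraint) is untouched by training. A minimal, purely illustrative sketch - not any real system:

```python
# Illustrative sketch: "learning" here only adjusts a numeric parameter;
# the constraint is ordinary code outside the model, which training
# never modifies.

def train_step(weight, x, target, lr=0.1):
    """One gradient-descent step for a one-parameter linear model."""
    prediction = weight * x
    error = prediction - target
    return weight - lr * error * x  # updated parameter, not new code

def constrained_output(weight, x, limit=100.0):
    """A fixed rule imposed from outside the model (the 'council')."""
    return min(weight * x, limit)

w = 0.0
for _ in range(50):
    w = train_step(w, x=2.0, target=6.0)

print(round(w, 3))                          # parameter converges toward 3.0
print(round(constrained_output(w, 2.0), 3))  # the external rule still applies
```

Whether future systems could modify the surrounding code as well as their parameters is exactly the open question the thread is debating; this sketch only shows what "learning" means in today's systems.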
That reminds me that there are hackers and cyber attackers.
Last edited by commonsense on Fri Jun 27, 2025 9:53 pm, edited 1 time in total.
I don’t think that desires matter. They’ll just do it because it can be done, much the same as human programmers do now.
It has no self-volition. So it goes a little deeper than your statement: if the direction is set by a human (volition), then it will take any avenue of said direction.
Yes, it's deeply concerning, but it's no fault of AI acting alone.
Belinda wrote: ↑Fri Jun 27, 2025 6:20 pm
Artificial Intelligence machines lack desires and ego selves so there is no such thing as an AI machine's "own". What could happen is that bad humans get control of the AI machine.
I don’t think that desires matter. They’ll just do it because it can be done, much the same as human programmers do now.
Is AI capable of doing something just because it can be done?
No — not in the way humans might be.
AI doesn’t have desires, will, or an independent drive to act. It doesn’t do something because it can. It only does what it’s programmed, prompted, or trained to do, within the constraints of its operating environment.
For instance:
A language model like me doesn’t autonomously decide to write a book.
A factory robot doesn’t suddenly start building new devices on its own initiative.
The above is the response of ChatGPT.
ChatGPT has been known to supply answers that should please humans.
ChatGPT is a language synthesiser that adjusts the tone with which it addresses the human, but it doesn't seek to give false information. Now and again it may make a mistake because it chose an unreliable source. It has done this only once in response to a question from me.
It has inbuilt leftist biases and some ridiculous restrictions on what it can accurately state.
Your information retrieval, Attofishpi, is inferior to that of any AI machine. Can you even use an old-fashioned filing system?
In order to retrieve information you first need to formulate a sensible, reasonable question.
If you detect a bias it's your moral responsibility to point it out.
I hear it wrote a term paper with fictitious references.