Re: What should be done about uncontrolled AI?

Posted: Sun Feb 11, 2024 3:26 pm
by Sculptor
If the worry is that the government might use Artificial Intelligence, then at least there would, for the first time, be SOME intelligence in the government. Because the evidence is at the moment that there is precious little intelligence in government now.

CIA is an oxymoron.
With the emphasis on moron.

Re: What should be done about uncontrolled AI?

Posted: Sun Feb 11, 2024 7:46 pm
by Iwannaplato

Re: What should be done about uncontrolled AI?

Posted: Mon Feb 12, 2024 12:10 am
by Sculptor

Re: What should be done about uncontrolled AI?

Posted: Mon Feb 12, 2024 8:09 am
by Iwannaplato
The first link gives specific answers to your question. It is not sensationalist.
I certainly included some information that could be considered sensational, but I specifically chose sources that are not known for sensationalism: Forbes, the New York Times, Stephen Hawking and a number of scientists, etc. I included a video in which two experts in the field describe AIs acquiring skills they were not asked to learn. I think that is valuable information when considering the dangers of AI, and of course its possible benefits. To me, however, it is important to realize that they can do that when weighing risks against benefits.

You asked why it would matter. I gave links to reasons why it matters what corporations and governments are doing with AI.

Re: What should be done about uncontrolled AI?

Posted: Thu Feb 15, 2024 8:18 pm
by commonsense
Sculptor wrote: Sun Feb 11, 2024 10:30 am
Wizard22 wrote: Sun Feb 11, 2024 8:58 am What should be done about uncontrolled AI?

Easy solution...
But should we pull the plug on all AI or should we retain AI for certain, presumably benign, purposes, eg, medical diagnosis?

Re: What should be done about uncontrolled AI?

Posted: Thu Feb 15, 2024 8:36 pm
by commonsense
I agree. Most responses given thus far outline a dystopian future where AI has become harmful and uncontrollable, but the essential question is what to do about it.

It seems that it is already too late to prevent AI from evolving into something horrific. At the very least there needs to be a temporary global moratorium on the use of all AI until an effective solution has been found.

The threat of malicious AI is so destructive to the human race that its potential for good is outweighed by its dangers. There may be no viable solution to the problem other than to disable all AI, and that would mean making a temporary measure permanent.

But even a temporary ban would require a well-vetted cyber force to monitor and impose real-world penalties on all practitioners—clearly an imposing task.

If the ultimate goal is to restrain AI so that it can only act for the good of mankind, a la Asimov’s laws of robotics, there would need to be some sort of ethics committee to decide what is good and what is harmful. As for a permanent ban, I see no way to implement it.

Re: What should be done about uncontrolled AI?

Posted: Thu Feb 15, 2024 9:14 pm
by commonsense
Yes, but the answers are far from fail-proof.

Legal regulations and industry standards need a system of enforcement to be effective.

Corporate culture and human perspectives training won’t protect the world from bad actors.

Re: What should be done about uncontrolled AI?

Posted: Thu Feb 15, 2024 9:31 pm
by Iwannaplato
commonsense wrote: Thu Feb 15, 2024 8:36 pm
I agree. Most responses given thus far outline a dystopian future where AI has become harmful and uncontrollable, but the essential question is what to do about it.

It seems that it is already too late to prevent AI from evolving into something horrific. At the very least there needs to be a temporary global moratorium on the use of all AI until an effective solution has been found.

The threat of malicious AI is so destructive to the human race that its potential for good is outweighed by its dangers. There may be no viable solution to the problem other than to disable all AI, and that would mean making a temporary measure permanent.

But even a temporary ban would require a well-vetted cyber force to monitor and impose real-world penalties on all practitioners—clearly an imposing task.

If the ultimate goal is to restrain AI so that it can only act for the good of mankind, a la Asimov’s laws of robotics, there would need to be some sort of ethics committee to decide what is good and what is harmful. As for a permanent ban, I see no way to implement it.
It seems like you are agreeing with Sculptor, or at least with what he writes just prior to where you write 'I agree'. But you then go on to agree with my concerns, though you state them more certainly. I don't assume AI will head in dystopian directions, but I don't see enough caution about its dangers.