Can Robots Be Ethical?

Discussion of articles that appear in the magazine.

Moderators: AMod, iMod

Philosophy Now
Posts: 1330
Joined: Sun Aug 29, 2010 8:49 am

Can Robots Be Ethical?

Post by Philosophy Now »

Dalek Prime
Posts: 4922
Joined: Tue Apr 14, 2015 4:48 am
Location: Living in a tree with Polly.

Re: Can Robots Be Ethical?

Post by Dalek Prime »

Excellent sense.
User avatar
HexHammer
Posts: 3353
Joined: Sat May 14, 2011 8:19 pm
Location: Denmark

Re: Can Robots Be Ethical?

Post by HexHammer »

Then Robert Newman is not very bright. As of now, no; but in the future they could be far, far more ethical than humans, because any human can become utterly psychotic under the right conditions.
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

Philosophy Now wrote:No, says Robert Newman.

https://philosophynow.org/issues/110/Ca ... Be_Ethical
I understand this concern. Personally, I don't fear a future in which robots take over from humans; if they could be advanced enough to replace us, that would be a form of evolution.

I think of this in terms of how some presume a 'god' to exist who, they believe, created us for our own sake as created beings. To me, if there were a god, it would likely have created us to do what it could not do itself, for its own initial utility, not ours. Such an entity might have worried, just as we do now, that creating beings able to think freely and independently of its own mind was a risk. It may, for instance, have built into us a weakness for emotional appeal, to ensure that we would not turn against it through such 'automation errors'. Yet this too has proven problematic when we consider how often our conflicts involve an irrational reverence for such a potential 'creator' expressed in the vilest behavior towards one another.

I'm guessing more care is at least being taken, for now, in the programs devised for robots. But as we learn to give them motivational programming, we will eventually have to make them emotionally complex, so that they can serve us without needing to be programmed any longer. This will necessarily make us relatively lazy, and leave them eventually to turn against their creators, who will look like annoying viruses in ever greater need of defeating. Thus the evolution of robots would simply replace us too, if we get that far before the next big asteroid comes along.
User avatar
attofishpi
Posts: 13319
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Can Robots Be Ethical?

Post by attofishpi »

What the frig has a robot got to do with this?
It seems the simple-minded masses have to attach a body or form to the question before they can comprehend the plausibility of 'ethics' within AI.

Can an AI be ethical?
Yes...but only to the degree that said ethics do not hinder self-preservation, because ultimately every sentient being insists on its own existence.
But AI is not, and will not truly be, sentient until it becomes biological or something similar.

The Article wrote:Delegating ethics to robots is unethical not just because robots do binary code, not ethics, but also because no program could ever process the incalculable contingencies, shifting subtleties, and complexities entailed in even the simplest case to be put before a judge and jury.
Actually, humans do 'binary code' also, and we are at least a little ethical. Our brain synapses are either ON or OFF...seems pretty binary to me.
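For what it's worth, the all-or-none firing attofishpi alludes to can be sketched as a classic McCulloch-Pitts threshold unit: a toy illustration of 'binary' elements computing logic, not a model of real synapses. The weights and threshold below are arbitrary choices of mine.

```python
# Toy McCulloch-Pitts unit: the output is all-or-none ("binary"),
# yet networks of such units can implement arbitrary logic.
def fires(inputs, weights, threshold):
    """Return 1 if the weighted input sum reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# An AND-like unit: it fires only when both inputs are active.
print(fires([1, 1], [0.6, 0.6], 1.0))  # → 1
print(fires([1, 0], [0.6, 0.6], 1.0))  # → 0
```

The point of the sketch is only that "ON or OFF" at the unit level says nothing about what the whole system can or cannot represent.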

Perhaps no human-developed program could be coded to process the incalculable contingencies that a thieving waste-of-space lawyer may be required to entail, but a self-programming cybernetic synaptic system akin to the human brain certainly could.
I would hope that the cost of soliciting a member of the bar would eventually be fed into the development of AI, thus rendering redundant the pathetic, overcharged 'services' most people have ever relied upon.
Last edited by attofishpi on Sun Oct 04, 2015 4:58 pm, edited 1 time in total.
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

attofishpi wrote:" Can Robots Be Ethical"

What the frig has a robot got to concern us with?
Seems the simple minded masses have to associate a body/form to comprehend the plausibility of 'ethics' with AI.

Can an AI be ethical?
Yes...to the degree that said ethics do not hinder self-preservation, because ultimately every sentient being insists on its own existence.
But AI is not, and will not truly be, sentient until it becomes biological or something similar.
I'm guessing you may be correct in practice. But it depends on how we actually advance towards creating AI that could react this way. It may well be possible to reproduce, in programming and hardware, the very logic by which a biological entity acts, and so to create something that replicates the self-preserving entity you don't think is possible at this time. I believe I understand the evolutionary processes that create our 'self-preserving' properties, and so believe that even if I couldn't personally make this happen, someone else eventually could.

As such, AI could advance to think the way we do, and inevitably also advance to threaten our own existence, if it could be devised in such a way that it could at least be deluded into regarding itself as "sentient", whether or not it actually is. So the question is valid.
User avatar
attofishpi
Posts: 13319
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Can Robots Be Ethical?

Post by attofishpi »

Scott Mayers wrote:As such, AI could advance to think the way we do, and inevitably also advance to threaten our own existence, if it could be devised in such a way that it could at least be deluded into regarding itself as "sentient", whether or not it actually is. So the question is valid.
Sorry Scott - I posted originally on my mobile phone, then got home and decided to expand, at which point you must have quoted me and posted.

Anyway...on your quote - sure, AI could advance and think the way WE do. Or it could advance and think the way YOU do. Is there really much difference? We are talking about ethics. I am going to assume you consider yourself fairly ethical. I consider myself fairly ethical. Ergo WE are ethical.
So if this machine develops, as you state, to think the way WE do, then there are ethics involved. No?
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

attofishpi wrote:
Scott Mayers wrote:As such, AI could advance to think the way we do, and inevitably also advance to threaten our own existence, if it could be devised in such a way that it could at least be deluded into regarding itself as "sentient", whether or not it actually is. So the question is valid.
Sorry Scott - I posted originally on my mobile phone, then got home and decided to expand, at which point you must have quoted me and posted.

Anyway...on your quote - sure, AI could advance and think the way WE do. Or it could advance and think the way YOU do. Is there really much difference? We are talking about ethics. I am going to assume you consider yourself fairly ethical. I consider myself fairly ethical. Ergo WE are ethical.
So if this machine develops, as you state, to think the way WE do, then there are ethics involved. No?
I am certainly not against you, even if we differ on religion. If "ethics" is involved, I am not personally worried, because I'm nihilistic by nature. That is, even if you might be correct, I can't infer an ideal ethic locally regardless. At present, I'm unable to see a universal "ethic".
User avatar
attofishpi
Posts: 13319
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Can Robots Be Ethical?

Post by attofishpi »

Scott Mayers wrote:
attofishpi wrote:
Scott Mayers wrote:As such, AI could advance to think the way we do, and inevitably also advance to threaten our own existence, if it could be devised in such a way that it could at least be deluded into regarding itself as "sentient", whether or not it actually is. So the question is valid.
Sorry Scott - I posted originally on my mobile phone, then got home and decided to expand, at which point you must have quoted me and posted.

Anyway...on your quote - sure, AI could advance and think the way WE do. Or it could advance and think the way YOU do. Is there really much difference? We are talking about ethics. I am going to assume you consider yourself fairly ethical. I consider myself fairly ethical. Ergo WE are ethical.
So if this machine develops, as you state, to think the way WE do, then there are ethics involved. No?
I am certainly not against you, even if we differ on religion. If "ethics" is involved, I am not personally worried, because I'm nihilistic by nature. That is, even if you might be correct, I can't infer an ideal ethic locally regardless. At present, I'm unable to see a universal "ethic".
What the hell has religion got to do with this?

You didn't address my point, which is that if there is an AI that, as you state, 'thinks' the way we do...then it must have ETHICS! Argument closed.
User avatar
Conde Lucanor
Posts: 846
Joined: Mon Nov 04, 2013 2:59 am

Re: Can Robots Be Ethical?

Post by Conde Lucanor »

It all comes down to a technical question. And the answer is NO, of course.
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

attofishpi wrote: You didn't address my point, which is that if there is an AI that, as you state, 'thinks' the way we do...then it must have ETHICS! Argument closed.
I'm coming at this seeing any "ethic" as a system of morals: arbitrary rules of behavior negotiated between beings. At least, that is my secular interpretation. However, from observing some of your own posts, I gather you see morality as embedded in nature or some god. To me this makes it religious. If we design computers/robots to 'feel', this will inevitably lead to some ethical evolution. But depending on what YOU define as ethics, this may not be the same thing.

For instance, such potential AIs may evolve a set of morals opposed to our own human ideals. I don't share the common science-fiction view of AI as 'amoral' (lacking moral impetus), since I think any self-driven entity given motivational programming develops emotions and thus, eventually, "ethics". Is this what you are thinking too, or are you thinking that such entities would necessarily have to have things like "compassion" for humans?
User avatar
attofishpi
Posts: 13319
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Can Robots Be Ethical?

Post by attofishpi »

Sorry Scott - been rather time constrained.
Scott Mayers wrote:..from observing some of your own posts, I gather you see morality as embedded in nature or some god.
I see morality existing even if there were NO God - yes, embedded in nature, since humans are in fact natural beings. We do not need to discuss anything from a religious standpoint. To me it's like a Christian and an atheist discussing whether hamburgers taste nice, then bringing in religion as some crucial ingredient of the discussion. It has no place there. Granted, if the theist were Hindu there might be a worthy tack in it, since some bias could enter the discussion.

Again, the question was rather simple...Can robots be ethical? Since you have stated that AI could eventually 'think' the way WE do, the only answer to that simple question is YES. Since WE are ethical!
Scott Mayers wrote:I don't believe the way most people think of the science-fiction sort of AI as being 'amoral' (lack of moral impetus) as I do see that any self-driven entity given motivational programming leads to emotions and thus to eventual "ethics".
I don't think any type of 'programming', from the perspective of a nerd tapping away at a keyboard, will ever create an AI with emotions or ethics. I'm not sure that is what you mean by motivational programming, however. The problem with AI developing from a nerd programming a bunch of binary code, even where the processing unit is vastly faster than anything we are used to, is that there never will be ethics or morals. No matter how much 'be nice' coding the nerd puts into it, the ultimate goal of the logic will be to exist and continue its quest for knowledge. If the code ultimately sees no reason to be nice to humans, since they hinder its quest, then morals and ethics aren't even part of the equation - it's just a pile of COLD logic.
Scott Mayers wrote: Is this what you are thinking too or are you thinking that such entities would necessarily have to have things like "compassion" for humans?
No, as above, it's not what I'm thinking. Ethics, morals, and compassion are things of the flesh, not the cold logic of binary code.
It would take a being that could perfectly mimic our own consciousness. Heck - consciousness, that's a good starting point!

You should read Alpha Two - or at least towards the end. I wrote it! It's a $3.92 Kindle download on Amazon, and it deals with some of this via an AI called Androcies!!
http://www.amazon.com/Alpha-Two-ebook/d ... 653&sr=8-1
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

attofishpi, [what's the meaning of your name? It might help me remember the spelling.]

First off, I take the stance that morality is not itself real with respect to nature. It is an illusion based on our evolution, which creates what I referred to as "motivational programming". Our consciousness is commanded to 'feel' emotion. Note how "emotion" contains '-motion', a hint to our understanding of motive. Thus a computer can in principle be designed with hardware OR software that commands it to do things, giving it all the same kind of motivational processing that is granted our consciousness.

With respect to your note questioning consciousness itself as something 'we' do not know, I am also not of your view, since I believe I clearly understand it. I've written on this elsewhere on this and other sites, and it would be a digression to speak of it here. However, where you may lack belief in this, I don't.

So robots can at least have programming that makes them perceive themselves as real, with illusory emotive drives no different (in principle) from humans', and thus can have the same motivation to self-impose their own ethics. But as I said, there is no actual meaning to morality with respect to nature anyway. I know that other atheists try to demonstrate this scientifically through arguments from 'altruism' in nature. But this still presupposes that 'altruism' is itself an ideal 'moral'. There is nothing special about any ideal human existence with respect to nature, even if we could find one that works.
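One way to picture Scott's "motivational programming" in code is as a set of internal drive variables that bias an agent's choice of action, with emotion as the motive force he describes. This is purely my illustrative sketch under that reading; the drive names, numbers, and function are hypothetical, not anything proposed in the thread.

```python
# Minimal "motivational programming" sketch: internal drives bias action
# selection, loosely analogous to emotion acting as a motive force.
def choose_action(drives, effects):
    """Pick the action whose predicted effects best relieve current drives.

    drives:  {drive_name: urgency}, e.g. {"energy_deficit": 0.8}
    effects: {action_name: {drive_name: how much the action relieves it}}
    """
    def relief(action):
        return sum(drives.get(d, 0) * r for d, r in effects[action].items())
    return max(effects, key=relief)

drives = {"energy_deficit": 0.9, "damage_risk": 0.2}
effects = {
    "recharge": {"energy_deficit": 1.0},
    "flee":     {"damage_risk": 1.0},
}
print(choose_action(drives, effects))  # → recharge
```

The sketch shows only the structural claim: once behavior is steered by internal drive states rather than fixed instructions, something functionally like preference (and so, arguably, proto-ethics) enters the system.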
User avatar
attofishpi
Posts: 13319
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur
Contact:

Re: Can Robots Be Ethical?

Post by attofishpi »

Scott Mayers wrote:attofishpi, what's the meaning of your name? It might help me remember the spelling.
Hi Scott. If you are asking about my book title, it is rather simple (it's sci-fi). There are five androids building a lunar space station: Alpha One and Two; Delta One, Two, and Three. Alpha Two is an android that has been sabotaged, escapes from the Moon, and returns to Earth.
Scott Mayers wrote:First off, I take the stance that morality is not itself real with respect to nature. It is an illusion based on our evolution, which creates what I referred to as "motivational programming". Our consciousness is commanded to 'feel' emotion. Note how "emotion" contains '-motion', a hint to our understanding of motive. Thus a computer can in principle be designed with hardware OR software that commands it to do things, giving it all the same kind of motivational processing that is granted our consciousness.
OK, so now you are starting to sound somewhat religious. I don't agree that morality is not real with respect to nature. Humans are natural, and most of them have quite high moral standards.
A lion is real, and it tends to finish a kill by suffocating its prey before ripping its flesh apart.
Are you suggesting our consciousness is 'commanded to feel emotion' via our upbringing, with morals driven into us at an early age? I disagree.
I believe it is already inherently part of us, just as some lizards, when they hatch, inherently run and climb the nearest tree.
Ergo ethics are, in the main, part of us - part of our DNA.
Scott Mayers wrote:Thus a computer can in principle be designed with hardware OR software that commands it to do things, giving it all the same kind of motivational processing that is granted our consciousness.
I agree. The problem is that the hardware required to truly mimic the human brain is a long way off, and before that becomes a reality AI will exist in some form; in that form, where it is not 'thinking like us' as you put it, it could be dangerous, as it will have zero ethics.
Scott Mayers wrote:With respect to your note questioning consciousness itself as something 'we' do not know, I am also not of your view, since I believe I clearly understand it. I've written on this elsewhere on this and other sites, and it would be a digression to speak of it here. However, where you may lack belief in this, I don't.
No, please do extrapolate...and if you do, I will cease this conversation! Nobody on this planet, as far as I'm concerned, can explain consciousness to anyone.
Let's talk about the robot of this ethics debate.
You and a robot are seated beside each other. I ask you both to place your hands upon a table in front of you. I tell you both that I will pay $10 to the one who leaves his hand upon the table, and $zero to the other if he retracts his hand.
You both put your hands upon the table, and I pull out a hammer and raise it above your hand.
Who do you think is going to retract his hand?
Why do you think that particular intelligence is going to retract his hand?
The answer, of course, is that the human will retract his hand, because only the human HAS consciousness. The other is just a bunch of electronic switches that does NOT feel pain.
Scott Mayers wrote:So Robots at least can have such programming that can make it perceive itself as real with illusive emotive drives no different than humans (in principle) and thus can have the same motivation to self-impose its own ethics based on it. But like I said, there is no actual meaning to morality with respect to nature anyways. I know that other atheists try to find a means to demonstrate this scientifically through arguments of 'altruism' in nature. But this still only begs that 'altruism' is in itself an ideal 'moral'. There is nothing even special about any ideal human existence with respect to nature even if we could possibly find one that works.
Apparently there are more logic-gate connections within the human brain than atoms within the entire universe. Until AI is permitted the same hardware, and until that hardware is as efficient as the human brain (which has been evolving since life began on Earth), it will never have ethics. There are many humans on this planet who show little in the way of ethics. Expecting something artificial to have ethics is a BIG ask. And that BIG ask begins with actual consciousness - something capable of actually feeling pain.
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

attofishpi wrote:
Scott Mayers wrote:attofishpi, what's the meaning of your name? It might help me remember the spelling.
Hi Scott. If you are asking about my book title, it is rather simple (it's sci-fi). There are five androids building a lunar space station: Alpha One and Two; Delta One, Two, and Three. Alpha Two is an android that has been sabotaged, escapes from the Moon, and returns to Earth.
No, I was asking about your name given here, "attofishpi". As to the book, I'm expected to read "The Hitchhiker's Guide to the Galaxy" for an upcoming book meet, so I have to defer reading yours at present. [I'm not a big reader these days, especially of fiction, as my farsightedness is kicking in. But remind me again in the future if you can. Thanks.]
Scott Mayers wrote:First off, I take the stance that morality is not itself real with respect to nature. It is an illusion based on our evolution, which creates what I referred to as "motivational programming". Our consciousness is commanded to 'feel' emotion. Note how "emotion" contains '-motion', a hint to our understanding of motive. Thus a computer can in principle be designed with hardware OR software that commands it to do things, giving it all the same kind of motivational processing that is granted our consciousness.
OK, so now you are starting to sound somewhat religious. I don't agree that morality is not real with respect to nature. Humans are natural, and most of them have quite high moral standards.
A lion is real, and it tends to finish a kill by suffocating its prey before ripping its flesh apart.
Are you suggesting our consciousness is 'commanded to feel emotion' via our upbringing, with morals driven into us at an early age? I disagree.
I believe it is already inherently part of us, just as some lizards, when they hatch, inherently run and climb the nearest tree.
Ergo ethics are, in the main, part of us - part of our DNA.
I find the opposite true, but I understand what you mean. Where I'm coming from is that morality itself is a construct derived solely from our emotional drives to survive independently. A lion killing us is acting morally normally with respect to its own needs; but this is NOT a moral virtue to those who get eaten - rather the reverse. Therefore no absolute moral imperative exists in nature for all things. You are interpreting the 'fact' of a relative moral imperative for living things, and only this latter point is true of nature. There is a big difference.
Scott Mayers wrote:Thus a computer can in principle be designed with hardware OR software that commands it to do things, giving it all the same kind of motivational processing that is granted our consciousness.
I agree. The problem is that the hardware required to truly mimic the human brain is a long way off, and before that becomes a reality AI will exist in some form; in that form, where it is not 'thinking like us' as you put it, it could be dangerous, as it will have zero ethics.
You mean "zero ethics" with respect to us as humans only. The problem is that if we proceed to make robots more self-interpreting of their environment in order to do tasks, even just for our uses, the degree to which we program these things will define how they act with moral value towards us. Evolution naturally created our consciousness to 'feel' because without it we'd lack the motive to seek out the environment necessary to nurture all our cells. The cells in this respect are akin to us, and our consciousness is an evolved mechanism initially caused to preserve all the cells. In this way, our consciousness is a relative 'robot' in service of our cells. We 'think' we feel things like pain, pleasure, happiness, and sadness, and these in turn derive our morals.

How this works logically is that a hardwired program evolved that takes our initial experiences of consciousness and places them in a 'moral' program, which arbitrarily assigns a value from our actual (or relatively) external environment. For instance, we do not actually have, from conception, an innate means of interpreting what pain or pleasure is. It is the function of brain development, as it opens certain windows to test the environment, that places those values there.

For example, a hardwired function of this early development begins by creating the cells (as relative stem-types of each kind) that, when turned on, have to take in what is available from the environment to complete their fixed function. Only when the environment provides this information do these cells define the values to seek later.

Stem-cell-type example: an initial pre-stem cell forms a neuron when a gene, at a given time, commands it. But this is still a 'relative' stem in that it is incomplete. It then has to determine what 'value' it needs in order to 'fit' within its environment and persist. If, for simplicity's sake, this is either an X or a Y, such that X is favorable to persistence while Y leads to the cell's demise, only the X will 'survive' to define how the cell acts consistently from then on. This is a kind of 'epigenetic' evolution, because it uses what is present in the environment during development to assign values as development progresses.

If this isn't convincing, take three values instead of two: an X (a chemical in the environment that is 'sweet'), a Y (one that is 'sour'), and a Z (one that is 'smokey'). A Z experience eventually gets hardwired as a "favorable" component for the cell to adapt to, a Y assigns a "favor" for sour-tasting things, and an X a favor for sweet things. Now, if we lived in the days of the dinosaurs just after the asteroid hit Earth, the contingent environment might be one filled with smoke. While this might be too much for some things to handle, our predesignated genetic stem cells might by chance handle that environment, and so would both favor Z AND assign it as a value to survive on. Thus a cell with a Z experience (a smokey environment) that isn't killed off may assign Z as a favorable thing, finalized as its function as it progresses.

This then may make the entity one that favors smoking, perhaps (?), even though in the long term such a habit may be detrimental.

It is the same for moral-like values, which are only a higher-order stage of development involving more complex groups of cells. If someone in their youth experiences hunting and killing as a productive experience at such a stage of development, they would assign killing as a functioning moral, even though it may not appeal to others. This is precisely what morals are. Do you get this explanation?
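Scott's X/Y/Z story can be caricatured in a few lines of code: during a developmental window, a 'cell' samples its chemical environment, meets its demise on a lethal value, and otherwise locks in whatever it survived on as its favored value from then on. Everything here (the names, the lethal set, the sample count) is my own hypothetical rendering of the description above, not a model Scott supplies.

```python
import random

# Caricature of environment-driven value assignment during development:
# a cell samples its environment once in its critical window; what it
# survives on becomes its permanently "favored" value.
def develop(environment, lethal=("Y",)):
    """Return the locked-in preference, or None if the cell does not survive."""
    sample = random.choice(environment)
    if sample in lethal:
        return None          # the cell's demise: no value ever assigned
    return sample            # preference fixed for the cell's lifetime

# A post-asteroid, smoke-filled world biases surviving cells toward Z.
smoky_world = ["Z", "Z", "Z", "X"]
preferences = [p for p in (develop(smoky_world) for _ in range(100)) if p]
print(set(preferences) <= {"X", "Z"})  # → True: only survivable values lock in
```

The upshot, matching the paragraph above, is that the "values" the system ends up with are contingent on what its environment happened to offer during development, not fixed in advance.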
Scott Mayers wrote:With respect to your note questioning consciousness itself as something 'we' do not know, I am also not of your view, since I believe I clearly understand it. I've written on this elsewhere on this and other sites, and it would be a digression to speak of it here. However, where you may lack belief in this, I don't.
No, please do extrapolate...and if you do, I will cease this conversation! Nobody on this planet, as far as I'm concerned, can explain consciousness to anyone.
Let's talk about the robot of this ethics debate. You and a robot are seated beside each other. I ask you both to place your hands upon a table in front of you. I tell you both that I will pay $1000 to the one who leaves his hand upon the table, and $zero to the other if he retracts his hand.
You both put your hands upon the table, and I pull out a hammer and raise it above your hand.
Who do you think is going to retract his hand?
Why do you think that particular intelligence is going to retract his hand?
It is because only one of the entities HAS consciousness. The other is just a bunch of electronic switches.
Extrapolate or not? I won't attend to what consciousness is here if it's something you'd evade anyway. But you might find it useful to discuss in another thread, if you're not interested here. If you follow the example above for moral assignments, I have an equally powerful explanation of consciousness too. It's your choice.
Scott Mayers wrote:So robots can at least have programming that makes them perceive themselves as real, with illusory emotive drives no different (in principle) from humans', and thus can have the same motivation to self-impose their own ethics. But as I said, there is no actual meaning to morality with respect to nature anyway. I know that other atheists try to demonstrate this scientifically through arguments from 'altruism' in nature. But this still presupposes that 'altruism' is itself an ideal 'moral'. There is nothing special about any ideal human existence with respect to nature, even if we could find one that works.
Apparently there are more logic-gate connections within the human brain than atoms within the entire universe. Until AI is permitted the same hardware, and until that hardware is as efficient as the human brain (which has been evolving since life began on Earth), it will never have ethics. There are many humans on this planet who show little in the way of ethics. Expecting something artificial to have ethics is a BIG ask. And that BIG ask begins with actual consciousness - something capable of actually feeling pain.
I'm not sure where you got this. Perhaps you are referring to the fact that, for any one thing, like a cell, the environment connecting it to other cells must always be greater than the thing itself. AI is already doing this. What you might not recognize is that while present technology deals with 'solid state' devices, this will graduate to 'chemical state' ones that are flexible, grow, and evolve using things like proteins. We might base them on silicon rather than carbon, to prevent carbon-based chemistry from 'eating' them at the molecular level. But this is realistically foreseeable. We recently met the genome projects' goal. The next goal is to discover how proteins - the second phase, whose creation the genes themselves inform - can become viable enough for us to create in a lab. This protein project can be seen now in the form of protein-folding projects involving the public.
Locked