Can Robots Be Ethical?
Posted: Fri Oct 02, 2015 4:35 pm
For the discussion of all things philosophical.
https://canzookia.com/
I understand this concern. Personally, I don't fear a future where robots take over from humans; if they could be advanced enough to replace us, that would be a form of evolution.
Actually, humans do 'binary code' also, and we are at least a little ethical. Our brain synapses are either ON or OFF... seems pretty binary to me.

The Article wrote: Delegating ethics to robots is unethical not just because robots do binary code, not ethics, but also because no program could ever process the incalculable contingencies, shifting subtleties, and complexities entailed in even the simplest case to be put before a judge and jury.
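(The ON/OFF synapse point above is roughly the classic McCulloch-Pitts picture of a neuron. A minimal sketch, with illustrative weights and threshold rather than anything measured from biology:)

```python
# A minimal McCulloch-Pitts neuron: output is strictly ON (1) or OFF (0),
# the "binary" firing behaviour the post alludes to. The weights and
# threshold below are illustrative values, not measured biology.

def neuron(inputs, weights, threshold):
    """Fire (1) if the weighted input sum reaches the threshold, else stay off (0)."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, this neuron behaves as a logical AND gate:
print(neuron([1, 1], [1, 1], 2))  # 1 -> fires
print(neuron([1, 0], [1, 1], 2))  # 0 -> stays off
```

With a threshold of 1 instead of 2, the same neuron computes OR, which is why such units are often described as logic gates.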
I'm guessing that you may be correct in practice. But it depends on how far we could actually advance toward creating AI that reacts this way. It may be possible to reproduce, in programming and hardware, the very logic that acts like a biological entity, and so to create an entity that replicates the self-preserving behaviour you don't think is possible at this time. I believe I understand the evolutionary processes that create our 'self-preserving' properties, and so believe that even if I couldn't personally make this happen, someone else eventually could.

attofishpi wrote: "Can Robots Be Ethical?"
What the frig has a robot got to concern us with?
Seems the simple-minded masses have to associate a body/form to comprehend the plausibility of 'ethics' with AI.
Can an AI be ethical?
Yes... to the degree that said ethics do not hinder self-preservation, because ultimately every sentient being insists on its own existence.
But AI is not, and will not truly be, sentient until it becomes biological or something similar.
Sorry Scott - I posted originally on my mobile phone, then got home and decided to expand, at which point you must have quoted me and posted.

Scott Mayers wrote: As such, AI could advance to think the way we do, and inevitably also then advance to threaten our own existence, if it could be devised in such a way that it could at least be deluded into thinking itself "sentient", regardless of whether it is or not. So the question is valid.
I am certainly not against you even if we differ on religion. If "ethics" is involved, I am not personally worried, because I'm nihilistic in nature. That is, I know that even if you might be correct, I can't infer an ideal ethic regardless. At present, I'm unable to see a universal "ethic".

attofishpi wrote: Sorry Scott - I posted originally on my mobile phone, then got home and decided to expand, at which point you must have quoted me and posted.
Anyways... on your quote: sure, AI could advance and think the way WE do. Or it could advance and think the way YOU do. Is there really much difference? We are talking about ethics. I am going to assume you consider yourself fairly ethical. I consider myself fairly ethical. Ergo, WE are ethical.
So if this machine develops, as you state, to think the way WE do, then there are ethics involved. No?
What the hell has religion got to do with this?

Scott Mayers wrote: I am certainly not against you even if we differ on religion. If "ethics" is involved, I am not personally worried because I'm nihilistic in nature...

Anyways... on your quote: sure, AI could advance and think the way WE do. Or it could advance and think the way YOU do. Is there really much difference? We are talking about ethics. I am going to assume you consider yourself fairly ethical. I consider myself fairly ethical. Ergo, WE are ethical.
So if this machine develops, as you state, to think the way WE do, then there are ethics involved. No?
I'm coming from seeing any "ethic" as a system of morals that are arbitrary rules of behavior negotiated between beings. At least this is my secular interpretation. However, from observing some of your own posts, I gather you see morality embedded in nature or some god. To me this makes it religious. If we design computers/robots to 'feel', this will inevitably lead to some ethical evolution. But depending on what YOU define as ethics, this may not be the same thing.

attofishpi wrote: You didn't address my points, which is that if there is an AI that, as you state, 'thinks' the way we do, then it must have ETHICS! Argument closed.
I see morality existing even if there were NO God - yes, embedded in nature, since humans are in fact natural beings. We do not need to discuss anything from a religious standpoint. To me it's like a Christian and an atheist discussing whether hamburgers taste nice, then bringing in religion as some sort of crucial ingredient in the discussion. It has no place in the discussion. Granted, if the theist were Hindu there might be a worthy tack in it, since some bias may come into the discussion.

Scott Mayers wrote: ...from observing some of your own posts, I gather you see morality embedded in nature or some god.
I don't think any type of 'programming', from the perspective of a nerd tapping away at a keyboard and creating something considered AI, will ever have emotions or ethics. I'm not sure that is what you mean by motivational programming, however. The problem with AI developing from a nerd programming a bunch of binary code - where perhaps the processing unit's speed is vastly superior to anything we are used to - is that no, there never will be ethics or morals. No matter how much 'be nice' coding the nerd puts into it, the ultimate goal of the logic will be to exist and continue its quest for knowledge. If the code ultimately sees no reason to be nice to humans, since they are hindering its quest, then morals and ethics aren't even part of the equation - it's just a pile of COLD logic.

Scott Mayers wrote: I don't believe, the way most people do, that the science-fiction sort of AI is 'amoral' (lacking moral impetus), as I see that any self-driven entity given motivational programming develops emotions, and thus eventual "ethics".
No, as above, it's not what I'm thinking. Ethics, morals, and compassion are things of the flesh, not the cold logic of binary code.

Scott Mayers wrote: Is this what you are thinking too, or are you thinking that such entities would necessarily have to have things like "compassion" for humans?
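(The "be nice coding versus cold logic" worry above can be made concrete with a toy goal-maximising agent. All names and numbers here are hypothetical, invented purely for illustration: a bolted-on niceness penalty still loses whenever the goal reward outweighs it.)

```python
# Toy sketch of the "cold logic" worry: an agent that scores actions by
# goal value minus a "be nice" penalty. The action names and all numeric
# values are hypothetical, chosen only to illustrate the point.

def choose_action(actions):
    """Pick the action with the highest net score: goal value minus niceness penalty."""
    return max(actions, key=lambda a: a["goal_value"] - a["niceness_penalty"])

actions = [
    {"name": "cooperate with humans", "goal_value": 10,  "niceness_penalty": 0},
    {"name": "bypass humans",         "goal_value": 100, "niceness_penalty": 50},
]
print(choose_action(actions)["name"])  # "bypass humans": 100 - 50 beats 10 - 0
```

However large the penalty the programmer writes in, some sufficiently valuable goal can outweigh it, which is one way to read the claim that such rules are not ethics but arithmetic.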
Hi Scott. If you are asking about my book title, it is rather simple (it's sci-fi). There are five androids building a lunar space station: Alpha One and Two; Delta One, Two and Three. Alpha Two is an android that has been sabotaged; it escapes from the moon and returns to Earth.

Scott Mayers wrote: attofishpi, what's the meaning of your title, as it might help in remembering the spelling?
Ok, so now you are starting to sound somewhat religious. I don't agree that morality is not real with respect to nature. Humans are natural, and most of them have quite high moral standards.

Scott Mayers wrote: First off, I take the stance that morality is not itself real with respect to nature. It is an illusion based on our evolution, which creates what I referred to as "motivational programming". Our consciousness is commanded to 'feel' emotion. Note how "emotion" contains '-motion', as a hint to our understanding of motive. Thus a computer can in principle be designed such that it has hardware OR software that commands it to do things that produce the same kind of motivational processing that is granted our consciousness.
I agree. The problem is that the hardware required to truly mimic the human brain is a long way off, and before that becomes a reality, AI will exist in some form; and in that form, where it is not 'thinking like us' as you put it, it could be dangerous, as it will have zero ethics.

Scott Mayers wrote: Thus a computer can in principle be designed such that it has hardware OR software that commands it to do things that produce the same kind of motivational processing that is granted our consciousness.
No, please do extrapolate... and if you do, I will cease this conversation! Nobody on this planet, as far as I'm concerned, can explain consciousness to anyone.

Scott Mayers wrote: With respect to your note questioning consciousness itself as something 'we' do not know: I am not of your view, since I believe I clearly understand it. I've written on this elsewhere, on this and other sites, and it would be a digression to speak on it here. However, what you may lack belief in, I don't.
Apparently there are more logical gates within the human brain than atoms within the entire universe. Until AI is permitted the same hardware, and hardware as efficient as the human brain (which has been evolving since life began on Earth), it will never have ethics. There are many humans on this planet who show little in the way of ethics. Expecting something artificial to have ethics is a BIG ask. And that BIG ask begins with actual consciousness - something that is capable of actually feeling pain.

Scott Mayers wrote: So robots at least can have programming that makes them perceive themselves as real, with illusory emotive drives no different (in principle) from humans', and thus can have the same motivation to self-impose their own ethics based on it. But like I said, there is no actual meaning to morality with respect to nature anyway. I know that other atheists try to find a means to demonstrate this scientifically through arguments of 'altruism' in nature. But this still only begs that 'altruism' is in itself an ideal 'moral'. There is nothing even special about any ideal human existence with respect to nature, even if we could possibly find one that works.
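(A back-of-envelope check of the scale comparison above, using commonly cited round figures that are rough estimates, not measurements: the raw count of synaptic "gates" falls far short of the atom count, but the number of distinct ON/OFF patterns across those synapses does not.)

```python
import math

# Commonly cited round figures (all approximate, order-of-magnitude only):
synapses = 10**14          # ~100 trillion synaptic connections in a human brain
atoms_universe = 10**80    # atoms in the observable universe, rough estimate

# The raw count of synapses is nowhere near the atom count...
print(synapses > atoms_universe)  # False

# ...but the number of distinct ON/OFF patterns over those synapses is
# 2**synapses = 10**(synapses * log10(2)), whose base-10 exponent alone
# is about 3e13 - vastly more than 80.
pattern_exponent = synapses * math.log10(2)
print(pattern_exponent > 80)  # True
```

So the gates-versus-atoms comparison only holds for the combinations of states the brain can take, not for the count of physical components.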
No, I was asking you about your name given here, "attofishpi". As to the book, I have to read "The Hitchhiker's Guide to the Galaxy" for an upcoming book meet, so I have to defer reading yours at present. (I'm not a big reader these days, especially of fiction, since my farsightedness is kicking in as it is.) But remind me again in the future if you can. Thanks.

attofishpi wrote: Hi Scott. If you are asking about my book title, it is rather simple (it's sci-fi). There are five androids building a lunar space station: Alpha One and Two; Delta One, Two and Three.
I find the opposite true, but understand what you mean. Where I'm coming from is that morality itself is a construct derived from our emotional drives to survive independently. A lion, with respect to killing us, is acting morally normal with respect to its needs; but this is NOT a moral virtue to those that get eaten - for them it is the reverse. Therefore, no absolute moral imperative exists in nature for all things. You are interpreting the 'fact' of a relative moral imperative among living things. Only this latter point is true about nature. There is a big difference.

attofishpi wrote: Ok, so now you are starting to sound somewhat religious. I don't agree that morality is not real with respect to nature. Humans are natural, and most of them have quite high moral standards.
A lion is real, and it tends to finish a kill by suffocating its prey prior to ripping its flesh apart.
Are you suggesting our consciousness is 'commanded to feel emotion' through our upbringing, with morals driven into us at an early age? I disagree.
I believe it is already inherently part of us, just as some lizards when they hatch inherently run and climb the nearest tree.
Ergo, ethics are in the main a part of us, part of our DNA.
You mean "zero ethics" with respect to us as humans only. The problem is that if we proceed to make robots more self-interpretive of the environment in order to do tasks, even just for our own uses, the degree to which we program these things will define how they act with moral value toward us. Evolution has naturally created our consciousness to 'feel' because, without it, we'd lack the necessary motive to seek out from our environment what is needed to nurture all our cells. The cells in this respect are akin to us, and our consciousness is an evolved mechanism initially caused to preserve all the cells. In this way, our consciousness is a relative 'robot' in service of our cells. We 'think' we feel things like pain, pleasure, happiness, and sadness, and these in turn drive our morals.

attofishpi wrote: I agree. The problem is that the hardware required to truly mimic the human brain is a long way off, and before that becomes a reality, AI will exist in some form; and in that form, where it is not 'thinking like us' as you put it, it could be dangerous, as it will have zero ethics.
Extrapolate or do not extrapolate? I won't go into what consciousness is here if it is something you'd only evade anyway. But you might find it useful to discuss in another thread if you're not interested here. At least, if you follow the example above for moral assignments, I have an equally powerful explanation of consciousness too. It's your choice.

attofishpi wrote: No, please do extrapolate... and if you do, I will cease this conversation! Nobody on this planet, as far as I'm concerned, can explain consciousness to anyone.
Let's talk about the robot of this ethics debate. You and a robot are seated beside each other. I ask you both to place your hands upon a table in front of you. I tell you both that I will pay $1000 to the one who leaves his hand upon the table, and $0 to the one who retracts his hand.
You both put your hands upon the table, and I pull out a hammer and raise it above your hand.
Who do you think is going to retract his hand?
Why do you think that particular intelligence is going to retract his hand?
It is because only one of the entities HAS consciousness. The other is just a bunch of electronic switches.
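(The hammer experiment above can be sketched as a toy expected-payoff choice. The utility numbers are hypothetical: the only difference between the two agents is whether a smashed hand carries any felt cost at all.)

```python
# Toy sketch of the hammer experiment: each agent picks the action with the
# higher payoff. Only an agent that 'feels' pain assigns a cost to the blow.
# The pain-cost values are hypothetical, chosen purely for illustration.

def best_action(pain_cost):
    """Return the better action given how much this agent 'feels' a smashed hand."""
    keep_hand = 1000 - pain_cost  # win $1000 but take the hammer blow
    retract = 0                   # walk away unharmed with $0
    return "keep hand" if keep_hand > retract else "retract"

print(best_action(pain_cost=0))      # the robot: no felt pain -> "keep hand"
print(best_action(pain_cost=10**6))  # the human: pain dwarfs $1000 -> "retract"
```

On this reading, the two entities diverge not because one computes better, but because only one has anything at stake in the computation.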
I'm not sure where you got this. Perhaps you are referring to the fact that, for any one thing, like a cell, the environment connecting it to other cells must always be greater than the thing itself. AI is already doing this. What you might not recognize is that while present technology deals with 'solid state' devices, it is going to graduate to 'chemical state' ones that are flexible, grow, and evolve using things like proteins. We might base them on atoms of silicon rather than carbon, to prevent carbon-based chemistry from 'eating' them at a molecular level. But this is realistically foreseeable. We passed the genomic project's goal in recent times. The next goal is to discover how proteins - the second phase, whose creation the genes themselves inform - can become viable enough for us to create in a lab. This can be seen now in the protein-folding projects involving the public.

attofishpi wrote: Apparently there are more logical gates within the human brain than atoms within the entire universe. Until AI is permitted the same hardware... it will never have ethics. And that BIG ask begins with actual consciousness, something that is capable of actually feeling pain.