Can Robots Be Ethical?


attofishpi
Posts: 13319
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur

Re: Can Robots Be Ethical?

Post by attofishpi »

Scott Mayers wrote:No, I was asking you about your name given here, "attofishpi". As to the book, I have an upcoming expectation to read "The Hitchhiker's Guide to the Universe" for an upcoming book meet and so have to defer reading yours at present. [I'm not a big reader, especially of fiction, for the most part these days, since my farsightedness is kicking in as it is. But remind me again in the future if you can. Thanks.]
Ok. attofishpi stems from a really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really small fish pie. (i'm pisces - a fish swimming in a sea of light 'waves' ergo pi)
Scott Mayers wrote:I find the opposite true but understand what you mean. Where I'm coming from is that morality itself is a construct derived by our emotional drives to survive independently only. A lion with respect to killing us is acting morally normal with respect to its needs; but this is NOT a moral virtue by those that get eaten and is thus the reverse. Therefore, no absolute moral imperative exists to nature for all things. You are interpreting the 'fact' of a relative moral imperative to living things. This latter point is what is true about nature only. There is a big difference.
I think morals stem to a certain extent from the entity in question attaining a level of empathy.
Scott Mayers wrote:
attofishpi wrote:I agree. The problem is that the hardware required to truly mimic the human brain is a long way off, and before that becomes a reality...AI will exist in some form, and in that form, where it is not 'thinking like us' as you put it...it could be dangerous and it will have zero ethics.
You mean, "zero ethics" with respect to us as humans only.
I mean zero ethics, as in NO ethics, bar what some nerd programmer thought he was placing within the algorithm as some safeguard.
You still are not grasping what I am attempting to convey with respect to programmed cold logic in comparison to actual consciousness. Let's say I write a computer program that controls a crane with no human input. Let's say I code it to perfectly measure the weight of the mass being picked up, and the correct counterbalance, ready for when the crane picks up the mass and swivels around to drop the load at its destination. This works perfectly for 3 days; then on the fourth day there is heavy wind. Suddenly on day four the crane, having not been programmed to adjust for strong wind, drops the load on poor old Fred, and Fred dies. The crane program had zero ethics from the time it began running...because it's a program - cold logic. Nobody in their right mind would have ever suggested the crane program should have ethics!
Cold-logic programmed AI is just the same. It is SIMULATED intelligence. It will never empathise, it will never FEEL pain; it's nothing but CPU cold logic.
As I stated, AI will exist in that form - simulated intelligence - totally unconscious (zero ethics) - until it becomes something akin to biological. ONLY at that point, the point where it CAN FEEL the PAIN of a hammer hitting it on the hand, feel the warmth of the sun on its face and appreciate these things, will it ever be truly conscious and have gained the ability to BE ethical (not just simulate it).
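The crane scenario above can be put as a minimal sketch (all names and numbers here are hypothetical, just to make the point concrete): the program handles exactly the conditions it was coded for, and wind simply does not exist in its model.

```python
def counterbalance_for(load_kg: float) -> float:
    """Return the counterweight needed to balance a given load
    (toy assumption: 1:1 balance on a symmetric jib)."""
    return load_kg * 1.0

def run_lift(load_kg: float, wind_speed_kmh: float = 0.0) -> str:
    """The program 'knows' only mass and counterweight; wind is invisible to it."""
    counter = counterbalance_for(load_kg)
    # Note: wind_speed_kmh is never consulted -- the safeguard was never coded.
    if counter >= load_kg:
        return "load delivered"
    return "lift refused"

# Days 1-3: calm weather, works perfectly.
print(run_lift(500))                     # load delivered
# Day 4: heavy wind -- the program behaves identically, with no notion
# that anything is different, let alone that Fred is below.
print(run_lift(500, wind_speed_kmh=90))  # load delivered
```

The point of the sketch is that nothing in the program "ignores" the wind; the wind is outside its world entirely, which is the sense in which it has zero ethics rather than bad ethics.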
Scott Mayers wrote:
attofishpi wrote:Let's talk about the robot of this ethics debate. You and a robot are sat beside each other. I ask you both to place your hands upon a table in front of you. I tell you both that I will pay the one that continues to leave his hand upon the table $1000, and the other $0 if he retracts his hand.
You both put your hands upon the table and I pull out a hammer and raise it above your hand.
Who do you think is going to retract his hand?
Why do you think that particular intelligence is going to retract his hand?
It is because only one of the entities HAS consciousness. The other is just a bunch of electronic switches.
Extrapolate or do not extrapolate? I won't attend to what consciousness is here if it is not something you'd evade anyways. But I know that you might find it useful to discuss in another thread if you're not interested here. At least, if you follow the example above for moral assignments, I also have an equally powerful explanation of consciousness too. It's your choice.
Unless you agree that to be conscious requires the ability to comprehend the senses in the same manner that sentient mammals do, then no.
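The hammer thought experiment above can be reduced to "cold logic" in a few lines (a hypothetical sketch; the utility numbers are invented): an agent that maximises payoff with no pain term in its utility never retracts, while one for whom felt pain dwarfs the $1000 always does.

```python
def choose(keep_payout: float, retract_payout: float, pain_cost: float) -> str:
    """Pick the action with the higher utility."""
    keep_utility = keep_payout - pain_cost
    return "keep hand" if keep_utility > retract_payout else "retract hand"

# The robot: pain_cost is zero, because nothing in it can hurt.
print(choose(1000, 0, pain_cost=0))      # keep hand
# The human: felt pain vastly outweighs the $1000 (an arbitrary large cost).
print(choose(1000, 0, pain_cost=10**6))  # retract hand
```

Note the asymmetry the sketch exposes: the robot's choice is just arithmetic over numbers someone else supplied, whereas the human's "pain_cost" is not a programmed parameter at all.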
Scott Mayers wrote:
attofishpi wrote:Apparently there are more logical gates within the human brain than atoms within the entire universe. Until AI is permitted the same hardware, and where the hardware is as efficient as the human brain (which has evolved since life began on Earth) it will never have ethics. There are many humans on this planet that show little in the way of ethics. Expecting something artificial to have ethics is a BIG ask. And that BIG ask begins with actual consciousness, something that is capable of actually feeling pain.
I'm not sure where you got this. Perhaps by this you are referring to the fact that, for any one thing, like a cell, the environment connecting it to other cells has to be greater than the thing itself. AI is already doing this. What you might not recognize is that while present technology deals with 'solid state' devices, this is going to graduate to 'chemical state' ones that are flexible, grow, and evolve using things like proteins. We might base them on atoms of silicon rather than carbon, to prevent carbon-based chemistry from 'eating' them on a molecular basis. But this is realistically perceivable. We passed the genome project's goal only recently. The next goal is to understand proteins, the second phase, whose creation the genes themselves inform, well enough for us to create them in a lab. This protein project can be seen now in the form of the protein-folding projects involving the public.
Sounds dandy.
Not sure whether I read or saw a doco re there being more logic gates in the human brain than atoms in the universe. Big call tho, eh? It comes down to the number of combinations at a binary synapse level, given the huge average number of connections each neuron actually has.
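The combinations reading of the "big call" can be checked on the back of an envelope. The figures below are commonly cited rough estimates, not measurements: the brain has nowhere near 10^80 physical gates, but the number of combinations of its binary-treated connections dwarfs the ~10^80 atoms usually estimated for the observable universe.

```python
import math

NEURONS = 8.6e10              # ~86 billion neurons (rough estimate)
SYNAPSES_PER_NEURON = 7e3     # ~7,000 connections each (rough average)
ATOMS_LOG10 = 80              # ~10^80 atoms, the usual estimate

synapses = NEURONS * SYNAPSES_PER_NEURON   # ~6e14 synapses in total
# If each synapse is treated as a binary switch, the number of distinct
# configurations is 2^synapses; compare exponents in base 10.
states_log10 = synapses * math.log10(2)    # ~1.8e14

print(f"~10^{states_log10:.0f} states vs ~10^{ATOMS_LOG10} atoms")
```

So the claim only holds for combinations of states, not for a count of gates; on these assumptions the exponent alone (~10^14) is vastly larger than 80.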
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

attofishpi,

Thanks for your name explanation. I can easily spell it now without thinking about it, which is handy when you begin a reply and have to keep paging back to make sure you spelled the name correctly. So I guess you are a very, very small 'spermatozoon', right?

As to consciousness, I'm still not certain of your distinction, as I agree with your condition but don't see the difference you're thinking of. I don't think of consciousness as a mysterious external soul; it is dependent upon the actual chemistry, but is a function of the effect of sensing things in space at multiple points simultaneously. This contradicts everything we observe except for our direct experience of it alone as independent beings. I explain this phenomenon with respect to how quantum mechanics thinks of superposition, but extended to far greater numbers of events in space. It may seem weird at first, but I explain this in logical terms. It is one of the major reasons I try to explain the need to understand universals as 'real' essences in their described members of such a defined set [from the original argument begun on Platonic Forms]. It's not mystical, if this is what you might be thinking.
I mean zero ethics, as in NO ethics, bar what some nerd programmer thought he was placing within the algorithm as some safeguard.
You still are not grasping what I am attempting to convey with respect to programmed cold logic in comparison to actual consciousness. Let's say I write a computer program that controls a crane with no human input. Let's say I code it to perfectly measure the weight of the mass being picked up, and the correct counterbalance, ready for when the crane picks up the mass and swivels around to drop the load at its destination. This works perfectly for 3 days; then on the fourth day there is heavy wind. Suddenly on day four the crane, having not been programmed to adjust for strong wind, drops the load on poor old Fred, and Fred dies. The crane program had zero ethics from the time it began running...because it's a program - cold logic. Nobody in their right mind would have ever suggested the crane program should have ethics!
Cold-logic programmed AI is just the same. It is SIMULATED intelligence. It will never empathise, it will never FEEL pain; it's nothing but CPU cold logic.
As I stated, AI will exist in that form - simulated intelligence - totally unconscious (zero ethics) - until it becomes something akin to biological. ONLY at that point, the point where it CAN FEEL the PAIN of a hammer hitting it on the hand, feel the warmth of the sun on its face and appreciate these things, will it ever be truly conscious and have gained the ability to BE ethical (not just simulate it).
I see that you are flexible in understanding but appear doubtful; I'm guessing that you aren't completely familiar with the differences between biological logic and computer logic. It is why I pointed out a future necessity to use protein-like nano-tech mechanisms if we are to exactly duplicate our brains. We agree that to 'feel' also implies 'ethics', unlike how Data from Star Trek, or the original Spock, is presumed to lack these in their deducto-logical constructs. A Data character in particular, I don't believe, is a realistic possibility without a default to emotion. Emotion that leads to morals is a coexisting byproduct of 'feeling' no matter what. It is only about degrees, especially of complexity.

Neurons act as both an inductive mechanism and a deductive one, much different than our general computer logic. This is first because each chemical acts as a distinct value, so neurons are multivariable in contrast to the simple binary we use in most computing at present. A neuron uses induction in that it is semi-defined to accept signals in the form of neurotransmitters, requiring a sufficient number of positively charged ones over negatively charged ones to meet a 'breakpoint' that sets off a charge. It also treats this through time, by how often (its frequency) and in which way such pulses create a sensation to us. How we 'feel' is the variant 'colors' of the multitude of possible charge events in these ways. But the ones that are 'empathetic' in frequency and simultaneity between multiple neurons is what makes us conscious as a whole.

Other factors differ too: our present solid-state computers working on AI require multiple processors to represent each neuron independently, with a fixed multitude of connections in a complex matrix, to simulate the possibilities where neurons usually grow or die. This is why we need a protein or nano-tech approach to mimic chemical logic that includes the fluid (non-solid-state) factors of nature. Another difference is that while our computer processors act as complete formal logics with billions of gates, the opposite is true of neurons. Each neuron acts as only one gate, but can be a relatively different kind depending upon its matrix connection with respect to other neurons.

However, these things are understood and can technically be done in software; it would just be 'slower', and so modelling us this way on our present computers would act like a super [place all your 'really's above here] slow sloth, without a matrix of supercomputers to do it.
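The neuron-as-one-gate idea above can be sketched in software (a toy model, with invented weights and thresholds, not a claim about real neurochemistry): each "neuron" is a single threshold unit that fires when its weighted inputs meet a breakpoint, and the same unit behaves as different logic gates depending only on its wiring, not on any internal program.

```python
def neuron_fires(inputs, weights, threshold: float) -> bool:
    """Fire iff the weighted sum of inputs meets the 'breakpoint':
    positive weights push toward firing, negative ones inhibit."""
    drive = sum(i * w for i, w in zip(inputs, weights))
    return drive >= threshold

# One threshold unit, three different gates, chosen purely by wiring:
AND_like = lambda a, b: neuron_fires([a, b], [1, 1], threshold=2)
OR_like  = lambda a, b: neuron_fires([a, b], [1, 1], threshold=1)
NOT_like = lambda a:    neuron_fires([a],    [-1],   threshold=0)

print(AND_like(1, 1), AND_like(1, 0))  # True False
print(OR_like(0, 1))                   # True
print(NOT_like(0), NOT_like(1))        # True False
```

This also illustrates the slowness point: simulating each neuron as data in a loop is vastly cheaper per gate than dedicating hardware to it, but correspondingly far slower at brain scale.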
attofishpi
Posts: 13319
Joined: Tue Aug 16, 2011 8:10 am
Location: Orion Spur

Re: Can Robots Be Ethical?

Post by attofishpi »

Scott Mayers wrote:attofishpi,
Thanks for your name explanation. I can easily spell it now without thinking about it, which is handy when you begin a reply and have to keep paging back to make sure you spelled the name correctly. So I guess you are a very, very small 'spermatozoon', right?
A conversing spermatozoon is exactly what I am...you just wait until I develop; then I can do away with hopping up and down vigorously on this keyboard while the owner is out.
Apologies for overlooking the Hitchhiker's Guide to the Universe quip, btw.
Scott Mayers wrote: I don't think of consciousness as a mysterious external soul
Nor do i.
Scott Mayers wrote:it is dependent upon the actual chemistry, but is a function of the effect of sensing things in space at multiple points simultaneously. This contradicts everything we observe except for our direct experience of it alone as independent beings. I explain this phenomenon with respect to how quantum mechanics thinks of superposition, but extended to far greater numbers of events in space. It may seem weird at first, but I explain this in logical terms. It is one of the major reasons I try to explain the need to understand universals as 'real' essences in their described members of such a defined set [from the original argument begun on Platonic Forms]. It's not mystical, if this is what you might be thinking.
Sure, it's far easier to explain how a starfish might react to touch by way of its biological chemistry than it is to explain how an actual recipient is able to comprehend such a sensation. I guess this is where you are bringing the term 'soul' to the table.
Scott Mayers wrote:I see that you are flexible in understanding but appear doubtful; I'm guessing that you aren't completely familiar with the differences between biological logic and computer logic.
Precisely. I understand computer logic fairly well, but admit I am extremely lacking by way of biological logic; and, to be honest, they are aeons apart.
Scott Mayers wrote:It is why I pointed out a future necessity to use protein-like nano-tech mechanisms if we are to exactly duplicate our brains. We agree that to 'feel' also implies 'ethics', unlike how Data from Star Trek, or the original Spock, is presumed to lack these in their deducto-logical constructs. A Data character in particular, I don't believe, is a realistic possibility without a default to emotion. Emotion that leads to morals is a coexisting byproduct of 'feeling' no matter what. It is only about degrees, especially of complexity.
Neurons act as both an inductive mechanism and a deductive one, much different than our general computer logic. This is first because each chemical acts as a distinct value, so neurons are multivariable in contrast to the simple binary we use in most computing at present. A neuron uses induction in that it is semi-defined to accept signals in the form of neurotransmitters, requiring a sufficient number of positively charged ones over negatively charged ones to meet a 'breakpoint' that sets off a charge. It also treats this through time, by how often (its frequency) and in which way such pulses create a sensation to us. How we 'feel' is the variant 'colors' of the multitude of possible charge events in these ways. But the ones that are 'empathetic' in frequency and simultaneity between multiple neurons is what makes us conscious as a whole.
Other factors differ too: our present solid-state computers working on AI require multiple processors to represent each neuron independently, with a fixed multitude of connections in a complex matrix, to simulate the possibilities where neurons usually grow or die. This is why we need a protein or nano-tech approach to mimic chemical logic that includes the fluid (non-solid-state) factors of nature. Another difference is that while our computer processors act as complete formal logics with billions of gates, the opposite is true of neurons. Each neuron acts as only one gate, but can be a relatively different kind depending upon its matrix connection with respect to other neurons.
However, these things are understood and can technically be done in software; it would just be 'slower', and so modelling us this way on our present computers would act like a super [place all your 'really's above here] slow sloth, without a matrix of supercomputers to do it.
Sure, you appear to agree that we can 'simulate' neurons, suggesting a mimicking of our sentience; however, I shall remain resolute. No matter how well we simulate our intelligence on such a constructed system, true consciousness by way of actual sensory perception, as we humans sense it, will not be accomplished via such methods unless biological methods are used.
At which point, who needs to call it AI, since we have recreated something akin to us.
Blueswing
Posts: 26
Joined: Sun Oct 18, 2015 2:17 pm

Re: Can Robots Be Ethical?

Post by Blueswing »

attofishpi wrote: I mean zero ethics, as in NO ethics, bar what some nerd programmer thought he was placing within the algorithm as some safeguard.
You still are not grasping what I am attempting to convey with respect to programmed cold logic in comparison to actual consciousness. Let's say I write a computer program that controls a crane with no human input. Let's say I code it to perfectly measure the weight of the mass being picked up, and the correct counterbalance, ready for when the crane picks up the mass and swivels around to drop the load at its destination. This works perfectly for 3 days; then on the fourth day there is heavy wind. Suddenly on day four the crane, having not been programmed to adjust for strong wind, drops the load on poor old Fred, and Fred dies. The crane program had zero ethics from the time it began running...because it's a program - cold logic. Nobody in their right mind would have ever suggested the crane program should have ethics!
Cold-logic programmed AI is just the same. It is SIMULATED intelligence. It will never empathise, it will never FEEL pain; it's nothing but CPU cold logic.
As I stated, AI will exist in that form - simulated intelligence - totally unconscious (zero ethics) - until it becomes something akin to biological. ONLY at that point, the point where it CAN FEEL the PAIN of a hammer hitting it on the hand, feel the warmth of the sun on its face and appreciate these things, will it ever be truly conscious and have gained the ability to BE ethical (not just simulate it).
Hi atto,

I liked your example with the crane, and I think you have correctly identified the reason robots can't be ethical: it's because they can't feel such things as pleasure and pain.

I think it's quite strange (and therefore interesting) that many apparently intelligent and well-informed people are unsure about this.
Blueswing
Posts: 26
Joined: Sun Oct 18, 2015 2:17 pm

Re: Can Robots Be Ethical?

Post by Blueswing »

Scott Mayers wrote: How we 'feel' is the variant 'colors' of the multitude of possible charge events in these ways. But the ones that are 'empathetic' in frequency and simultaneity between multiple neurons is what makes us conscious as a whole.
Hi Scott,

What makes us conscious as a whole is the largest outstanding question facing humanity. If you really knew how we feel and what makes us conscious as a whole, the entire world would know the name of Scott Mayers; you could expect multiple Nobel Prizes, great personal wealth, prestige and enduring fame. But unfortunately you don't.

Here's something really interesting that we do know about how a certain aspect of memory works: http://www.nimh.nih.gov/news/science-ne ... seen.shtml

But what we know doesn't go anything like as far as what you say.
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

Blueswing wrote:
Scott Mayers wrote: How we 'feel' is the variant 'colors' of the multitude of possible charge events in these ways. But the ones that are 'empathetic' in frequency and simultaneity between multiple neurons is what makes us conscious as a whole.
Hi Scott,

What makes us conscious as a whole is the largest outstanding question facing humanity. If you really knew how we feel and what makes us conscious as a whole, the entire world would know the name of Scott Mayers; you could expect multiple Nobel Prizes, great personal wealth, prestige and enduring fame. But unfortunately you don't.

Here's something really interesting that we do know about how a certain aspect of memory works: http://www.nimh.nih.gov/news/science-ne ... seen.shtml

But what we know doesn't go anything like as far as what you say.
I'm not concerned about whether the rest of the world appeals to me or not. I'm sufficiently confident in my own views not to worry, except for how or what others may abuse me for speaking on them. I have proposed this with explanation in many places but it requires a formal degree to even get a foot in the door for most things. Politics are also involved too in many things. So I don't pander to popularity. Even under the threat of torture, if my head still remains unbroken inside, while I might be forced to comply, it wouldn't change what I trust without the logical analysis to put my own mind in question.

But you seem to be overtly denying even what I might have to say about it as eternally unprovable. This is more about your own issues, not mine. But I appreciate your input and hope that you'd continue to participate. Thanks for the link. I already concluded this, but it may be interesting to read, as my own means of determining it is more logically based and/or dependent on some other things I've learned from science and experience.

To help: what in particular do you think "doesn't go anything like as far as what you say"?
Blueswing
Posts: 26
Joined: Sun Oct 18, 2015 2:17 pm

Re: Can Robots Be Ethical?

Post by Blueswing »

Scott Mayers wrote: I have proposed this with explanation in many places but it requires a formal degree to even get a foot in the door for most things. Politics are also involved too in many things. So I don't pander to popularity.
You've missed the point of what I was saying Scott. If what you were saying about how consciousness works had any credibility, if there was evidence to back it up, then you wouldn't have any problem getting a foot in the door.
But you seem to be overtly denying even what I might have to say about it as eternally unprovable.
I'm not saying anything like that. My point isn't that you don't have any evidence to support your ideas now.
Thanks for the link. I already concluded this...
But you don't have any evidence to reach such a conclusion, so saying that just damages your credibility.
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

Blueswing wrote:
Scott Mayers wrote: I have proposed this with explanation in many places but it requires a formal degree to even get a foot in the door for most things. Politics are also involved too in many things. So I don't pander to popularity.
You've missed the point of what I was saying Scott. If what you were saying about how consciousness works had any credibility, if there was evidence to back it up, then you wouldn't have any problem getting a foot in the door.
But you seem to be overtly denying even what I might have to say about it as eternally unprovable.
I'm not saying anything like that. My point isn't that you don't have any evidence to support your ideas now.
Thanks for the link. I already concluded this...
But you don't have any evidence to reach such a conclusion, so saying that just damages your credibility.
How do you know? I wasn't intending to prove this with respect to your appeal one way or the other. But just to be certain, what is it that you think I'm claiming in your own words?
Blueswing
Posts: 26
Joined: Sun Oct 18, 2015 2:17 pm

Re: Can Robots Be Ethical?

Post by Blueswing »

Scott Mayers wrote: How do you know? I wasn't intending to prove this with respect to your appeal one way or the other. But just to be certain, what is it that you think I'm claiming in your own words?
I don't need to use my words Scott, here are the claims you made in your own words:
Scott Mayers wrote: How we 'feel' is the variant 'colors' of the multitude of possible charge events in these ways. But the ones that are 'empathetic' in frequency and simultaneity between multiple neurons is what makes us conscious as a whole.
We don't yet know how we feel; we don't yet know what makes us conscious as a whole.
Ansiktsburk
Posts: 515
Joined: Sat Nov 02, 2013 12:03 pm
Location: Central Scandinavia

Re: Can Robots Be Ethical?

Post by Ansiktsburk »

Blueswing wrote:
attofishpi wrote: I mean zero ethics, as in NO ethics, bar what some nerd programmer thought he was placing within the algorithm as some safeguard.
You still are not grasping what I am attempting to convey with respect to programmed cold logic in comparison to actual consciousness. Let's say I write a computer program that controls a crane with no human input. Let's say I code it to perfectly measure the weight of the mass being picked up, and the correct counterbalance, ready for when the crane picks up the mass and swivels around to drop the load at its destination. This works perfectly for 3 days; then on the fourth day there is heavy wind. Suddenly on day four the crane, having not been programmed to adjust for strong wind, drops the load on poor old Fred, and Fred dies. The crane program had zero ethics from the time it began running...because it's a program - cold logic. Nobody in their right mind would have ever suggested the crane program should have ethics!
Cold-logic programmed AI is just the same. It is SIMULATED intelligence. It will never empathise, it will never FEEL pain; it's nothing but CPU cold logic.
As I stated, AI will exist in that form - simulated intelligence - totally unconscious (zero ethics) - until it becomes something akin to biological. ONLY at that point, the point where it CAN FEEL the PAIN of a hammer hitting it on the hand, feel the warmth of the sun on its face and appreciate these things, will it ever be truly conscious and have gained the ability to BE ethical (not just simulate it).
Hi atto,

I liked your example with the crane, and I think you have correctly identified the reason robots can't be ethical: it's because they can't feel such things as pleasure and pain.

I think it's quite strange (and therefore interesting) that many apparently intelligent and well-informed people are unsure about this.
But the (we) guys who program those things do feel both pleasure and pain. I cannot say why I feel pleasure and pain, but I can make a computer do things that are ethical. Or not. And who cares what people (or computers) feel, it's what they do that matters.

And as for the crane: in version 1.0.2 of the crane software there would have been a trouble report about the crane behaving irregularly in wind, which would have prohibited the crane from starting. Old Fred survives. If the human crane driver Percy, who ran the crane before it was computerized (and was made unemployed), had been there, he would have started the crane despite the winds, to show his boss that he was eager to do his best - failing to manoeuvre the crane, and killing Fred.
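The version 1.0.2 fix described above amounts to one guard clause (a hypothetical sketch; the version number comes from the post, the wind limit is invented): after the trouble report, the program refuses to start in high wind, which is exactly the kind of coded safeguard the thread has been calling "what some nerd programmer thought he was placing within the algorithm".

```python
MAX_SAFE_WIND_KMH = 50  # assumed limit from the (hypothetical) trouble report

def start_lift(load_kg: float, wind_speed_kmh: float) -> str:
    """Crane start-up check, as of 'version 1.0.2'."""
    # New in 1.0.2: refuse to start in high wind.
    if wind_speed_kmh > MAX_SAFE_WIND_KMH:
        return "lift refused: wind over limit"
    return "lift started"

print(start_lift(500, 20))  # lift started
print(start_lift(500, 90))  # lift refused: wind over limit
```

Whether such a guard counts as "ethics" or merely as engineering is, of course, the very question the thread is arguing about.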
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

Blueswing wrote:
Scott Mayers wrote: How do you know? I wasn't intending to prove this with respect to your appeal one way or the other. But just to be certain, what is it that you think I'm claiming in your own words?
I don't need to use my words Scott, here are the claims you made in your own words:
Scott Mayers wrote: How we 'feel' is the variant 'colors' of the multitude of possible charge events in these ways. But the ones that are 'empathetic' in frequency and simultaneity between multiple neurons is what makes us conscious as a whole.
We don't yet know how we feel; we don't yet know what makes us conscious as a whole.
Speak for yourself. You quoted me in limited context to a larger post that restricts meaning to how I was responding or choosing my words. Also, it appears the word "charge" above is a typo of mine as I cannot even interpret this without looking back. The link you provided is one such confirmation of my interpretation likely [I'd have to read it in full to determine for sure].

But we already measure active conscious brain activity by distinguishing 'brain waves'. Yet you don't need any external measures to be able to interpret what consciousness is; nor does it even appear possible to use any external means to 'prove' what I say, without some means both to alter our own consciousness to become something else AND then transfer that experience back into our own minds, to even recall such a possible experience.

This is a digression into the subject of consciousness, though. If you want to, we can go there, but it's separate from the concerns of this thread.
Blueswing
Posts: 26
Joined: Sun Oct 18, 2015 2:17 pm

Re: Can Robots Be Ethical?

Post by Blueswing »

Ansiktsburk wrote: And who cares what people (or computers) feel, it's what they do that matters.
That's an interesting position to take; I wonder if you have any experience of management, or personal relationships of any kind?
Blueswing
Posts: 26
Joined: Sun Oct 18, 2015 2:17 pm

Re: Can Robots Be Ethical?

Post by Blueswing »

Scott Mayers wrote:
Speak for yourself. You quoted me in limited context to a larger post that restricts meaning to how I was responding or choosing my words. Also, it appears the word "charge" above is a typo of mine as I cannot even interpret this without looking back. The link you provided is one such confirmation of my interpretation likely [I'd have to read it in full to determine for sure].
Well Scott, I would suggest that you go back and read what you said so you know what you were talking about, then go and read the article so you know what I am talking about, then come back and take a little more time drafting any response than you did here. Re-read what you have written before you send it, so that you know it makes sense. Unlike this:
Scott Mayers wrote:You quoted me in limited context to a larger post that restricts meaning to how I was responding or choosing my words.

My suggestion is that you should take the time and trouble to write coherently, rather than leaving it to your readers to guess at your meaning.
Scott Mayers
Posts: 2485
Joined: Wed Jul 08, 2015 1:53 am

Re: Can Robots Be Ethical?

Post by Scott Mayers »

Blueswing wrote:
Scott Mayers wrote:
Speak for yourself. You quoted me in limited context to a larger post that restricts meaning to how I was responding or choosing my words. Also, it appears the word "charge" above is a typo of mine as I cannot even interpret this without looking back. The link you provided is one such confirmation of my interpretation likely [I'd have to read it in full to determine for sure].
Well Scott, I would suggest that you go back and read what you said so you know what you were talking about, then go and read the article so you know what I am talking about, then come back and take a little more time drafting any response than you did here. Re-read what you have written before you send it, so that you know it makes sense. Unlike this:
Scott Mayers wrote:You quoted me in limited context to a larger post that restricts meaning to how I was responding or choosing my words.

My suggestion is that you should take the time and trouble to write coherently, rather than leaving it to your readers to guess at your meaning.
What is 'coherent' to one is not to another, dependent upon one's background experiences. Don't assume it is my burden to speak with respect to everyone's same linguistic culture. If you have a question, ask it. But you're not interested in that if you're digressing from the topic of this thread, as you ARE. If you want to question my views on consciousness, ask me, instead of expecting that I even follow what you are questioning. Otherwise you are coming across as if I should not even be allowed to speak my views if they are not popular by default everywhere I speak. Then what would be the use of you being here, if only to nod at each other? You can use Twitter for that.
Blueswing
Posts: 26
Joined: Sun Oct 18, 2015 2:17 pm

Re: Can Robots Be Ethical?

Post by Blueswing »

Scott Mayers wrote: What is 'coherent' to one is not to another, dependent upon one's background experiences.
OK, thank you Scott. Well, as I am unwilling to waste my time disentangling your poorly thought-out sentences, I doubt we will be discussing much here.