Scott Mayers wrote: No, I was asking you about your name given here, "attofishpi". As to the book, I have an upcoming expectation to read "The Hitchhiker's Guide to the Universe" for an upcoming book meet and so have to defer reading yours at present. [I'm not a big reader, especially of fiction, for the most part these days, since my farsightedness is kicking in as is. But remind me again in the future if you can. Thanks.]

Ok. attofishpi stems from a really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really really small fish pie. (I'm Pisces: a fish swimming in a sea of light 'waves', ergo pi.)
attofishpi wrote: I think morals stem, to a certain extent, from the entity in question attaining a level of empathy.

Scott Mayers wrote: I find the opposite true, but understand what you mean. Where I'm coming from is that morality itself is a construct derived solely from our emotional drives to survive independently. A lion, with respect to killing us, is acting morally normal with respect to its needs; but this is NOT a moral virtue to those that get eaten, and is thus the reverse. Therefore, no absolute moral imperative exists in nature for all things. You are interpreting the 'fact' of a relative moral imperative to living things. This latter point is what is true about nature only. There is a big difference.
attofishpi wrote: I agree. The problem is that the hardware required to truly mimic the human brain is a long way off, and before that becomes a reality AI will exist in some form, and in that form, where it is not 'thinking like us' as you put it, it could be dangerous and it will have zero ethics.

Scott Mayers wrote: You mean, "zero ethics" with respect to us as humans only.

I mean zero ethics, as in NO ethics, bar whatever safeguard some nerd programmer thought he was placing within the algorithm.
You are still not grasping what I am attempting to convey about programmed cold logic in comparison to actual consciousness. Let's say I write a computer program that controls a crane with no human input. Let's say I code it to perfectly measure the weight of the mass being picked up, and to set the correct counterbalance ready for when the crane picks up the mass and swivels around to drop the load at its destination. This works perfectly for three days; then on the fourth day there is heavy wind. Suddenly, the crane, never having been programmed to adjust for strong wind, drops the load on poor old Fred, and Fred dies. The crane program had zero ethics from the time it began running, because it's a program: cold logic. Nobody in their right mind would ever have suggested the crane program should have ethics!
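The crane story can be sketched in a few lines of code (a hypothetical toy, not any real controller; the masses, arm lengths, and threshold are made-up numbers). The program balances exactly the inputs it was written to read; a condition it was never programmed for, wind, simply does not exist in its world.

```python
# Toy sketch of the crane controller described above (all numbers are
# made up for illustration). The program is pure "cold logic": it
# balances whatever it was told to measure, and nothing else.

def counterbalance(load_kg, load_arm_m, counter_arm_m):
    """Counterweight (kg) whose moment about the pivot cancels the load's."""
    return load_kg * load_arm_m / counter_arm_m

def crane_cycle(load_kg, wind_moment=0.0):
    """One pick-and-swing cycle. wind_moment is real physics acting on
    the load, but NO sensor feeds it into the balancing logic below --
    the programmer never thought of wind."""
    cw = counterbalance(load_kg, load_arm_m=20.0, counter_arm_m=5.0)
    # Net moment about the pivot: load vs counterweight, plus the wind
    # the program knows nothing about.
    net_moment = load_kg * 20.0 - cw * 5.0 + wind_moment
    return "stable" if abs(net_moment) < 1.0 else "load dropped"

# Days 1-3: calm air, perfect lifts.
print(crane_cycle(1000.0))                      # stable
# Day 4: heavy wind -- an input outside the program's world. Poor Fred.
print(crane_cycle(1000.0, wind_moment=5000.0))  # load dropped
```

The point carries over unchanged: the failure is not a moral lapse inside the program, because there is nothing inside it but arithmetic.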
Cold-logic programmed AI is just the same. It is SIMULATED intelligence. It will never empathise, it will never FEEL pain; it's nothing but CPU cold logic.
As I stated, AI will exist in that form, simulated intelligence, totally unconscious (zero ethics), until it becomes something akin to biological. ONLY at that point, the point where it CAN FEEL the PAIN of a hammer hitting it on the hand, feel the warmth of the sun on its face, and appreciate these things, will it ever be truly conscious and have gained the ability to BE ethical (not just simulate it).
Unless you agree that to be conscious requires the ability to comprehend the senses in the same manner that sentient mammals do, then no.

Scott Mayers wrote: Extrapolate or do not extrapolate? I won't attend to what consciousness is here if it is not something you'd evade anyway. But I know that you might find it useful to discuss in another thread if you're not interested here. At least, if you follow the example above for moral assignments, I also have an equally powerful explanation of consciousness. It's your choice.

attofishpi wrote: Let's talk about the robot of this ethics debate. You and a robot are seated beside each other. I ask you both to place your hands upon a table in front of you. I tell you both that I will pay $1000 to the one that continues to leave his hand upon the table, and zero to the one that retracts his hand.
You both put your hands upon the table, and I pull out a hammer and raise it above your hand.
Who do you think is going to retract his hand?
Why do you think that particular intelligence is going to retract his hand?
It is because only one of the entities HAS consciousness. The other is just a bunch of electronic switches.
attofishpi wrote: Apparently there are more logic gates within the human brain than atoms within the entire universe. Until AI is permitted the same hardware, and where the hardware is as efficient as the human brain (which has evolved since life began on Earth), it will never have ethics. There are many humans on this planet that show little in the way of ethics. Expecting something artificial to have ethics is a BIG ask. And that BIG ask begins with actual consciousness: something that is capable of actually feeling pain.

Scott Mayers wrote: I'm not sure where you got this. Perhaps by this you are referring to the fact that, to any one thing, like a cell, the environment connecting it to other cells always has to be greater than the thing itself. AI is already doing this. What you might not recognize is that while present technology deals with 'solid state' devices, this is going to graduate to 'chemical state' ones that are flexible, grow, and evolve using things like proteins. We might base them on atoms of silicon rather than carbon, to prevent carbon-based chemistry from 'eating' them on a molecular basis. But this is realistically perceivable. We just passed the genomic project's goal in recent times. The next goal is to discover how proteins, the second phase whose creation the genes themselves inform, can become viable enough for us to create in a lab. This protein project can be seen now in the form of the protein-folding projects involving the public.

Sounds dandy.
Not sure whether I read or saw a doco re there being more logic gates in the human brain than atoms in the universe. Big call though, eh? It comes down to the number of possible combinations at a binary synapse level, given the huge average number of connections each neuron actually has.