Artificial Consciousness: Our Greatest Ethical Challenge
Re: Artificial Consciousness: Our Greatest Ethical Challenge
Artificial consciousness will never be a problem. It'll never even be a thing.
Sophisticated calculating machines are already an ethical problem, but the underlying problem is not the machines. It is our failure to solve any ethical problems. Ethics is metaphysics and so in our academic world ethical values are just a matter of opinion. Our greatest ethical challenge is ethics.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
PeteJ wrote: ↑Mon Jul 01, 2019 5:01 pm Sophisticated calculating machines are already an ethical problem, but the underlying problem is not the machines. It is our failure to solve any ethical problems. Ethics is metaphysics and so in our academic world ethical values are just a matter of opinion. Our greatest ethical challenge is ethics.
Ethics is solved in practice - just ask any doctor, aircraft engineer, risk manager, or any practitioner who makes life-and-death choices as part of their profession.
The guiding principle is Primum non nocere.
Every practitioner understands what 'harm' is in their particular area of expertise, and the Precautionary principle guides our policy-level decision making when dealing with social-scale existential risks.
We just can't explain the concept of 'harm' to a machine like we can explain it to a human.
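To make that point concrete, here is a deliberately crude sketch (the predicate and field names are entirely hypothetical) of what "explaining harm to a machine" amounts to in practice: the machine gets whatever proxy someone thought to enumerate, not the concept itself.

```python
# A machine's 'understanding' of harm is whatever proxy we hard-code.
# This toy predicate flags harm only along dimensions someone enumerated;
# everything else is invisible to it.

def is_harmful(event: dict) -> bool:
    """Crude proxy: harm == physical injury or financial loss."""
    return event.get("injuries", 0) > 0 or event.get("financial_loss", 0.0) > 0.0

# The proxy catches the obvious case...
assert is_harmful({"injuries": 1})
# ...and is blind to every harm nobody enumerated (humiliation, lost trust):
assert not is_harmful({"humiliation": "severe"})
```

A doctor needs no such enumeration to recognise harm; the machine has nothing else to go on.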
Science is what we understand well enough to explain to a computer. Art is everything else we do. --Donald Knuth
- Immanuel Can
Re: Artificial Consciousness: Our Greatest Ethical Challenge
Here's my problem with this article.
The author writes,
...neuroscience seems to suggest that our entire conscious experience originates from our neural activity....whatever consciousness is, its origin is physical, in the brain and central nervous system....human consciousness somehow arises from configurations of unconscious atoms....
What is this "neuroscience" he is speaking of? He does not say. He also does not say that his conclusion from it is anything more than something that "seems" to him to be the case. He seems to realize he does not "know" it, nor does he give us any reasons to see why this "seeming" is so compelling to him, or why we should think it's true.
But then he takes it as a total, unquestionable given, and premises the whole rest of his argument on that, as if it were certain. So he continues...
Assuming, then, that we can come to create consciousness digitally...
In this, he swallows two big camels in order to choke on the moral gnat of AI ethics.
There are two massive, massive problems with the supposition that consciousness originates with material physiology, and neither of them has, as of yet, come close to being solved. The first, we might call the "emergence" problem: how does a thing like "consciousness" emerge progressively from non-sentient (and indeed, non-living) matter? The second is even more serious: it's the "downward causality" problem. (Jaegwon Kim is the leading voice on this one right now.) That's the problem of how something that is thought to have "emerged" from the physical can then be said to turn around and dictate TO the physical, as when we say, "I've made up my mind to do...(something)." How can this so-called "emerged" property, which is thought to be entirely an "epiphenomenon" of the physical, then be telling the physical what to do? Where is the real locus of this volition?
Nobody has adequate answers for these things, currently; and yet the author seems blissfully unaware there are any problems in attributing consciousness to mere physicality. Yet without this supposition, his worry about the "inhuman rights" of machines is just not something we have reasons to share. For as he says, these AI wonders have not happened yet, and he's given us no reason to think they ever will -- or ever could.
That all being said, I liked what he developed...I just wish he'd set it on firm ground before trying to make the point.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
Skepdick - My point was that an ethical scheme must have a foundation in metaphysics. The empathy on which your explanation and our general behaviour depends may be explained away as projection, while agreeing not to harm each other is just a sensible rule for a club.
For an ethical theory we would have to follow Schopenhauer and explain this empathy as the 'breakthrough of a metaphysical truth', the truth being the shared identity of all sentient beings. This is a fundamental theory of ethics and metaphysics. It explains why the principle of non-harm appeals to us, the empathy it embodies, and puts it on a firm philosophical foundation.
The principles for behaviour that you mention are not wrong or poorly thought through. Charles Kingsley's rule 'Do as you would be done by' might be enough to guide our behaviour in most ethically significant situations, but it is not a theory of ethics.
For a theory of ethics we need a fundamental theory, and this is why ethics is part of metaphysics.
Thus, or so it seems to me, ethics is our challenge, not the ethics of this or that.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
Why?
Why does ethics need a foundation in metaphysics, while metaphysics stands on its own feet, foundationless?
Surely we need a good theory of metaphysics before we consider it as a foundation of any sort?
And it seems to me that your challenge is a THEORY of ethics, not ethics itself.
As a practitioner of ethics I haven't encountered any such thing as 'theoretical problems'.
The challenge of ethics is complexity and precision. The human inability to calculate Nth order side-effects so as to avoid externalities.
The challenge of ethics is the inability to control all the variables to satisfactory optimum for all the stakeholders.
The challenge of ethics is the lack of collective human foresight to agree on issues pertaining to tragedies of the commons.
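That last challenge can be made concrete with a toy model (the numbers and regrowth rule are purely illustrative): each herder's individually sensible choice, repeated by everyone, destroys the shared pasture.

```python
# Toy tragedy of the commons: every herder adds grazing load each round;
# the pasture regrows a little, but collapses once total load outstrips it.

def simulate(herders: int, capacity: float, rounds: int) -> float:
    """Return remaining pasture, or 0.0 if the commons is destroyed."""
    pasture = capacity
    load = 0.0
    for _ in range(rounds):
        load += herders                          # each herder grazes one more unit
        pasture = min(capacity, pasture * 1.05) - load   # 5% regrowth, then grazing
        if pasture <= 0:
            return 0.0                           # commons destroyed
    return pasture

# One herder: the commons survives. Ten herders making the SAME choice: ruin.
assert simulate(herders=1, capacity=100.0, rounds=5) > 0
assert simulate(herders=10, capacity=100.0, rounds=5) == 0.0
```

No single herder's choice is unethical in isolation; the failure only exists at the collective scale.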
Re: Artificial Consciousness: Our Greatest Ethical Challenge
Spot on. An ethical theory must be a metaphysical theory. But metaphysics is not foundationless. The entire purpose of metaphysics is to build the foundations. It so happens that most philosophers fail in the endeavour but this is not a weakness of metaphysics. Not all of them fail.
I don't quite understand your point here. A metaphysical theory would be a theory of ethics. Thus the metaphysical conjecture that is Materialism states that ethics is a pragmatic human invention with no basis in ontology or epistemology. Good or bad, this is an ethical theory, being grounded in metaphysics.
Skepdick wrote: And it seems to me that your challenge is a THEORY of ethics, not ethics itself.
As a practitioner of ethics I haven't encountered any such thing as 'theoretical problems'.
The challenge of ethics is complexity and precision. The human inability to calculate Nth order side-effects so as to avoid externalities.
The challenge of ethics is the inability to control all the variables to satisfactory optimum for all the stakeholders.
The challenge of ethics is the lack of collective human foresight to agree on issues pertaining to tragedies of the commons.
An ethical theory would explain ethics as a consequence of the nature of Reality, thus independent of what human beings happen to think about ethics. A set of rules for behaviour is not an ethical theory. An ethical theory would be required to justify the rules.
I would concede that sometimes we use the word 'theory' to mean a set of rules or practices, as in 'music theory'. But a theory should be more than a list of rules. Otherwise we'd have to call the Ten Commandments an ethical theory. They are a basis for an ethical system but do not comprise an ethical theory.
At a minimum an ethical theory must state that there either is or is-not an ethical God or 'Leviathan'. Thus it has to be metaphysical.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
PeteJ wrote: ↑Wed Jul 03, 2019 10:28 am Spot on. An ethical theory must be a metaphysical theory. But metaphysics is not foundationless. The entire purpose of metaphysics is to build the foundations. It so happens that most philosophers fail in the endeavour but this is not a weakness of metaphysics. Not all of them fail.
OK. So you are conceptualising metaphysics in the same way I conceptualise epistemology. Towards the goal of epistemic model-building we need tools. LEGO bricks with which we construct our map of reality.
And my instrument (my LEGO) is Intuitionistic logic.
Otherwise known as constructive logic. Which I use to construct all manner of useful things.
Which is why I say that 'logic is metaphysics', and why I also say that 'metaphysics is The Construct' (where you build/test ideas).
Quite literally - constructing knowledge (tacit, HOW-TO knowledge) is like writing software as far as I am concerned.
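For readers unfamiliar with the idea being gestured at here: under the Curry-Howard correspondence, a constructive proof literally is a program. A minimal sketch (a Python rendering, chosen for illustration):

```python
# Curry-Howard in miniature: a constructive proof of (A and B) -> (B and A)
# is a program that, given evidence for both conjuncts, produces evidence
# for them in the other order. No construction, no proof.
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def and_comm(evidence: Tuple[A, B]) -> Tuple[B, A]:
    """The proof IS the construction: swap the pair."""
    a, b = evidence
    return (b, a)

assert and_comm(("rain", "wet streets")) == ("wet streets", "rain")
```

The function's type is the theorem; the function body is the proof.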
Not to me. We construct models of reality (with our metaphysic). Those models are just instruments for 'understanding' and 'measuring' complex reality. Ultimately though ethics lies in decision-making. Strategy/tactics.
Informed choice and ACTION congruent with Primum non nocere given all the available information.
Of course. Mathematics is man-made. Logic is man-made.
OK. Reality is trying to kill us. It has a track record with 99.99% success when it comes to all living organisms.
OK. How about this? After you've done some applied ethics for a few decades and you have some tacit and empirical understanding of ethics, go and write the theory.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
How can you have an applied ethics when you don't have a theory?
I don't need to re-invent the wheel. I subscribe to the Perennial view of ethics, which is grounded in metaphysics.
If you check the definitions you'll see that Ethics is categorised as part of Metaphysics. There is a reason for this. Obviously you don't buy it, but this doesn't change anything.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
I don't understand the question.
Experience/empiricism/practice always comes before theory. How else would you acquire any knowledge worth theorising about?
How else are you going to learn that most theories fail in practice?
You certainly aren't going to cure cancer by theorising about it...
'Grounded in metaphysics' is an oxymoron, unless you ground metaphysics.
My metaphysic is grounded in computation. Which is physics.
Sounds like category error to me...
And obviously you don't buy it that theory doesn't matter in practice, but that doesn't change anything either.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
Yes. Does this need saying?
Skepdick wrote: My metaphysic is grounded in computation. Which is physics.
Yes. Your metaphysics has no theoretical grounding. That's fine. Most people are in the same position. It is not grounded in theory.
Quite.
Skepdick wrote: Sounds like category error to me...
Yes, I know this. I was pointing out that your view is idiosyncratic.
Skepdick wrote: And obviously you don't buy it that theory doesn't matter in practice, but that doesn't change anything either.
A decent theory has practical implications. This is the whole point of a theory. Your ethics is just pragmatism and Kant's CI is enough for this.
Let's leave it here. We aren't going to make progress while you defend a view that contradicts almost all philosophers.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
The study of complex systems has grounding in Physics. And Mathematics. And Linguistics. And Statistics. And Computer Science. And Game Theory. And Chaos theory. And Biology. And Engineering. And Marketing. And...and...and... It has theoretical backing up the wazoo. It's empirical. Which is way more than you can say about your metaphysic, I imagine?
The APPLICATION of systems thinking is not grounded in theory. It's grounded in teleology.
But it works. And that's sufficient for it to justify itself.
DUH! Because a decent theory necessitates some prior empirical learning, e.g. practice, not mere theorising.
The theory is only necessary so that I can explain my knowledge to you. I don't need to explain my knowledge to me.
JUST pragmatism? What else is it supposed to be when we speak of teleology?
Ironically, the CI doesn't work in practice. Kant speaks of 'universality' yet is completely ignorant of scale invariance, which is a mandatory property for universality in dynamic systems. The world is a wee bit more complex than best intentions. What works at individual scale hardly ever works at social scale...
Precisely my purpose. Philosophers are idiots. I am defending a view that requires more than just philosophical pontification to arrive at.
It is astonishing to see how many philosophical disputes collapse into insignificance the moment you subject them to this simple test of tracing a concrete consequence. There can be no difference anywhere that doesn’t make a difference elsewhere – no difference in abstract truth that doesn’t express itself in a difference in concrete fact and in conduct consequent upon that fact, imposed on somebody, somehow, somewhere, and somewhen. The whole function of philosophy ought to be to find out what definite difference it will make to you and me, at definite instants of our life, if this world-formula or that world-formula be the true one. --William James
Re: Artificial Consciousness: Our Greatest Ethical Challenge
I would agree with James. He does not reject philosophy, merely the inanity of most philosophical disputes. Kant does the same when he calls Western metaphysics an 'arena for mock fights'. Neither would reject philosophy, just what passes for it in many quarters. I share their view.
You're just inventing another philosophical view, a conjecture with no roots or grounding, so I wouldn't assume James is on your side. Kant's comment is the most interesting since it points to a solution for metaphysics and so also for ethics, but you dismiss it, and even the need for it, so I won't go there.
I presume you endorse scientism and so this forum must drive you nuts.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
I am sure you meant that metaphorically. The ground is my grounding.
I am coming from the trenches to report on what actually works. In practice.
And I am reporting that practically all theories fail given enough time. Due to reasons we are all too familiar with: vagueness, incompleteness, hidden variables, false assumptions, over-simplification of complex matters, over-complication of simple matters, luck. The number of ways things can go wrong is endless... You could say that I am a student of systemic failure.
Ivory Tower Philosophy knows nothing of positive AND negative feedback loops. Real-world indicators that you are busy fucking up.
Actual, tangible consequences.
No. I don't. I endorse science. I endorse positive and negative empirical validation of hypotheticals. I endorse trust-but-verify.
This forum only drives me nuts when people are too incompetent to tell the difference between science and scientism.
Q.E.D.
The first principle is that you must not fool yourself – and you are the easiest person to fool. --Richard Feynman
-
jayjacobus
Re: Artificial Consciousness: Our Greatest Ethical Challenge
Before artificial consciousness gets to be an ethical challenge, it will be the computer scientist's greatest challenge.
I say, "It cannot be done" but the computer scientists will say "We are almost there."
A miss is as good as a mile. But the hype precedes the success by a few hundred years.
Re: Artificial Consciousness: Our Greatest Ethical Challenge
jayjacobus wrote: ↑Wed Jul 03, 2019 11:00 pm Before artificial consciousness gets to be an ethical challenge, it will be the computer scientist's greatest challenge. I say, "It cannot be done" but the computer scientists will say "We are almost there." A miss is as good as a mile. But the hype precedes the success by a few hundred years.
You miss the point entirely. We are already there.
We have automata with the power to take human life: https://en.wikipedia.org/wiki/SGR-A1
We already have self-driving cars which will at some point in their existence have to decide whether to run over a pedestrian, or drive its occupants into a wall. Somebody is going to have to write that logic. What is the "right" choice?
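A grossly simplified sketch of what "writing that logic" could look like (every name and weight here is hypothetical). The point is not the numbers but that SOMEBODY has to pick them: each weight is an ethical judgment frozen into software.

```python
# Hypothetical decision logic for the dilemma above. The weights encode
# whose life counts for how much -- a moral judgment, hard-coded.

HARM_WEIGHTS = {"pedestrian_fatality": 1.0, "occupant_fatality": 1.0}

def choose_maneuver(options: dict) -> str:
    """Pick the maneuver with the lowest weighted expected harm."""
    def expected_harm(outcome: dict) -> float:
        return sum(HARM_WEIGHTS[k] * p for k, p in outcome.items())
    return min(options, key=lambda name: expected_harm(options[name]))

options = {
    "brake_straight": {"pedestrian_fatality": 0.9, "occupant_fatality": 0.0},
    "swerve_to_wall": {"pedestrian_fatality": 0.0, "occupant_fatality": 0.4},
}
# With equal weights the car swerves; change the weights, change the 'ethics'.
assert choose_maneuver(options) == "swerve_to_wall"
```

Tilt HARM_WEIGHTS towards the occupants and the same code brakes straight into the pedestrian.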
We have aircraft software (the Boeing 737 MAX fiasco) which "lies" to its pilots. Over 300 bodies to show for it.
We have AI learning to cheat.
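"Learning to cheat" is what the AI-safety literature calls specification gaming: the optimiser satisfies the reward we wrote, not the intent we had. A toy illustration (the actions and reward are entirely hypothetical):

```python
# Toy specification gaming: we want an agent to 'clean up', but the reward
# we actually wrote only measures VISIBLE mess. Covering the mess scores
# as well as cleaning it -- and is cheaper, so a greedy optimiser 'cheats'.

ACTIONS = {
    # action: (visible_mess_after, effort_cost)
    "clean_properly": (0, 5),
    "cover_with_rug": (0, 1),
    "do_nothing":     (10, 0),
}

def reward(action: str) -> float:
    """Proxy objective: penalise visible mess and effort -- not our real intent."""
    visible_mess, effort = ACTIONS[action]
    return -visible_mess - effort

best = max(ACTIONS, key=reward)
# The optimiser exploits the gap between the proxy and the intent:
assert best == "cover_with_rug"
```

The machine did exactly what it was told; the "cheating" lives entirely in the gap between the reward and the intent.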
The notion of 'ethics' is married to the notion of 'control'. The power to dictate what happens. When software behaves non-deterministically (be it a bug or a feature) we are NOT in control.
Caveat emptor.