You know it's hard to take you seriously when you are concerned about what entropy will do to us in a few ten billion years. But you are unconcerned about current major problems.TimeSeeker wrote: ↑Tue Sep 18, 2018 2:42 pm Which part of dying/extinction is ambiguous to you? Is dying a reification fallacy?
What could make morality objective?
Re: What could make morality objective?
-
TimeSeeker
- Posts: 2866
- Joined: Tue Sep 11, 2018 8:42 am
Re: What could make morality objective?
If pedantry is what you are after then you are also guilty of the same fallacy. Machine learning algorithms do not have "agency". They never will. They have goals. Success/Failure criteria.
Agency is not required for harm to occur. You are pre-supposing "malice". A benevolent AI is as dangerous to humans as a malevolent one.
Your stupid is making me angry.
https://wiki.lesswrong.com/wiki/Paperclip_maximizer
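The distinction drawn above - goals and success/failure criteria without agency - can be made concrete with a toy sketch. This is an illustrative invention in the spirit of the paperclip-maximizer thought experiment, not any real AI system; the objective function and budget are made up:

```python
# A goal-directed optimizer with no agency or malice: it has a success
# criterion (the objective) but no concept of "harm". Purely illustrative.

def paperclip_count(allocation):
    """Toy objective: paperclips produced per unit of resources diverted."""
    return 10 * allocation  # every resource unit yields 10 paperclips

def maximize(objective, budget):
    """Greedy search over allocations: commits to whatever scores highest."""
    best_alloc, best_score = 0, objective(0)
    for alloc in range(budget + 1):
        score = objective(alloc)
        if score > best_score:
            best_alloc, best_score = alloc, score
    return best_alloc, best_score

alloc, score = maximize(paperclip_count, budget=100)
print(alloc, score)  # commits the entire budget: 100 1000
```

The point of the sketch: nothing here is malicious, yet the optimizer diverts every available resource to its objective, which is exactly why benevolent and malevolent objectives can be equally dangerous.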
-
TimeSeeker
- Posts: 2866
- Joined: Tue Sep 11, 2018 8:42 am
Re: What could make morality objective?
If they are not existential risks to HUMANITY they are not "major" problems. If you integrate the time-function of continued human existence you will save more lives than preventing all human wars. The good of the many outweighs the good of the few - Individuals are expendable.Atla wrote: ↑Tue Sep 18, 2018 2:51 pmYou know it's hard to take you seriously when you are concerned about what entropy will do to us in a few ten billion years. But you are unconcerned about current major problems.TimeSeeker wrote: ↑Tue Sep 18, 2018 2:42 pm Which part of dying/extinction is ambiguous to you? Is dying a reification fallacy?
You too have fallen for the trap of temporal discounting.
Objective morality exists. The no harm principle. https://en.wikipedia.org/wiki/Precautionary_principle
The principle implies that there is a social responsibility to protect the public from exposure to harm, when scientific investigation has found a plausible risk. These protections can be relaxed only if further scientific findings emerge that provide sound evidence that no harm will result.
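The "trap of temporal discounting" mentioned above can be illustrated numerically. The discount rate and the number of future lives below are invented for illustration only:

```python
# How exponential temporal discounting shrinks the weight of far-future
# lives toward zero. Illustrative numbers, not a real policy model.

def discounted_value(value, annual_rate, years):
    """Present value of a future benefit under a constant discount rate."""
    return value / ((1 + annual_rate) ** years)

lives = 1_000_000_000  # a billion hypothetical future lives
for years in (10, 100, 1000):
    print(years, discounted_value(lives, annual_rate=0.03, years=years))
```

At a 3% rate, a billion lives a thousand years out are "worth" less than one present life - which is the discounting trap the post is pointing at.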
Last edited by TimeSeeker on Tue Sep 18, 2018 3:09 pm, edited 2 times in total.
- Immanuel Can
- Posts: 27612
- Joined: Wed Sep 25, 2013 4:42 pm
Re: What could make morality objective?
The one of us who does not believe in objective right and wrong...Dubious wrote: ↑Tue Sep 18, 2018 3:59 amNo contradictions; my logic is perfectly clear on the subject.Immanuel Can wrote: ↑Tue Sep 18, 2018 1:17 am Like I say, Dube: it's not me you have the contradiction with...it's yourself.
...says I am wrong.
...says I am doing wrong.
...says I am being the wrong kind of person.
...says I have wronged him.
...insists he is right.
...says I do not respond in the right way.
...says that it is not right of me to be what I am.
...rages and insults, because he feels I have violated his rights.
...believes that right-thinking people ought to agree with him.
But there is no contradiction.
Is that right?
-
TimeSeeker
- Posts: 2866
- Joined: Tue Sep 11, 2018 8:42 am
Re: What could make morality objective?
That's the exact same behavior you display. It is not a contradiction. It is the will to power.Immanuel Can wrote: ↑Tue Sep 18, 2018 3:06 pmThe one of us who does not believe in objective right and wrong...Dubious wrote: ↑Tue Sep 18, 2018 3:59 amNo contradictions; my logic is perfectly clear on the subject.Immanuel Can wrote: ↑Tue Sep 18, 2018 1:17 am Like I say, Dube: it's not me you have the contradiction with...it's yourself.
...says I am wrong.
...says I am doing wrong.
...says I am being the wrong kind of person.
...says I have wronged him.
...insists he is right.
...says I do not respond in the right way.
...says that it is not right of me to be what I am.
...rages and insults, because he feels I have violated his rights.
...believes that right-thinking people ought to agree with him.
But there is no contradiction.
Is that right?
-
Peter Holmes
- Posts: 4134
- Joined: Tue Jul 18, 2017 3:53 pm
Re: What could make morality objective?
To expand on what I said earlier: are any of the solutions to the so-called problems of meaning and reference themselves non-linguistic? Do you think we can get behind, beneath, through or beyond language by means of language - natural or formal?TimeSeeker wrote: ↑Tue Sep 18, 2018 2:00 pmYou seem to be trying to ignore the distinction I am drawing and missing the mark. They are not "metalanguages". There is something fundamentally different about the Chomsky hierarchy.Peter Holmes wrote: ↑Tue Sep 18, 2018 1:54 pm You seem to be looking for something to disagree with - and missing the mark.
We can produce reasonable (and at least social-evolutionary) explanations for why we invented formal languages, which are just metalanguages. We didn't invent the natural languages I'm referring to, any more than other species invented the sophisticated communication codes they've developed.
Lambda calculus solves the problem of meaning and the symbol-grounding problem. https://en.wikipedia.org/wiki/Symbol_grounding_problem
It solves the problems of reduction which plague natural languages: https://en.wikipedia.org/wiki/Reduction_(complexity)
Through computation (interpretation) these languages have direct effect on reality - automation/robotics/AI. They have agency and they become prescriptive rather than descriptive.
This leads directly to the ethical problems around friendly AI. Bostrom and Yudkowsky's work. This is the scariest thought experiment you probably haven't considered: https://wiki.lesswrong.com/wiki/Paperclip_maximizer
If you don't see that as a significant/consequential distinction and it's all "just language" then I am happy to acknowledge your interest in philosophy as purely academic.
My interest in philosophy isn't solely academic - though I wouldn't be ashamed if it were. But you seem to be grinding some utterly irrelevant axe about how we or the machines we make do or will use language to destroy everything. Non-sequitur.
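The quoted claim that formal languages reduce mechanically, unlike natural language, can be sketched in a few lines. This is a minimal illustration using Church numerals encoded as Python lambdas; the encoding is standard, but the snippet is my own sketch, not anything from the thread:

```python
# Lambda-calculus-style reduction via Python closures (Church numerals).
# The point: evaluation is mechanical and unambiguous - "meaning" here is
# just what the term reduces to.

zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(church):
    """Interpret a Church numeral by counting applications of f."""
    return church(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```

Whether this "solves" the symbol-grounding problem is exactly what the two posters dispute; the sketch only shows that reduction in a formal language leaves no room for ambiguity.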
Re: What could make morality objective?
I'm not after pedantry, I'm not a philosopher, and it wouldn't be the same fallacy.TimeSeeker wrote: ↑Tue Sep 18, 2018 2:57 pmIf pedantry is what you are after then you are also guilty of the same fallacy.
Well, now, yeah. A sufficiently advanced AI could develop learning algorithms with what could be called agency though.Machine learning algorithms do not have "agency". They never will. They have goals. Success/Failure criteria.
I didn't pre-suppose malice, where did I write that?Agency is not required for harm to occur. You are pre-supposing "malice". A benevolent AI is as dangerous to humans as a malevolent one.
A malevolent AI would most likely be more dangerous though than a benevolent one, obviously.
I don't quite see how wiping out humanity isn't an existential threat to humanity. If we are extinct at time X, we are also extinct at time X+Y.If they are not existential risks to HUMANITY they are not "major" problems. If you integrate the time-function of continued human existence you will save more lives than preventing all human wars.
You too have fallen for the trap of temporal discounting.
Of course no objective morality can ever exist, no idea what you're saying. But we can agree on a no harm principle.Objective morality exists. The no harm principle.
Maybe if you would communicate your ideas in a saner way?
-
TimeSeeker
- Posts: 2866
- Joined: Tue Sep 11, 2018 8:42 am
Re: What could make morality objective?
Holy shit. More red herrings! FOCUS. There are problems and then there are PROBLEMS.Peter Holmes wrote: ↑Tue Sep 18, 2018 3:15 pm To expand on what I said earlier: are any of the solutions to the so-called problems of meaning and reference themselves non-linguistic? Do you think we can get behind, beneath, through or beyond language by means of language - natural or formal?
We are on a thread called "What could make morality objective?"Peter Holmes wrote: ↑Tue Sep 18, 2018 3:15 pm My interest in philosophy isn't solely academic - though I wouldn't be ashamed if it were. But you seem to be grinding some utterly irrelevant axe about how we or the machines we make do or will use language to destroy everything. Non-sequitur.
I am pointing to you that AI is an EXISTENTIAL THREAT to HUMANITY.
And you think it's a non-sequitur?
If your conception of "objective morality" has no overlap with "human survival" I think you've missed the forest for the trees.
I don't often go to such lengths - but people who think like that truly deserve to be expelled from society. You are stealing air.
Last edited by TimeSeeker on Tue Sep 18, 2018 3:45 pm, edited 4 times in total.
-
TimeSeeker
- Posts: 2866
- Joined: Tue Sep 11, 2018 8:42 am
Re: What could make morality objective?
Call it rainbow-farting Unicorns if you think it's so important. The fact that it is GOING TO KILL US is the part that matters most. Do you have Attention deficit disorder perhaps?
I have given you an "objective morality" utility-function - human survival. Therefore preventing human harm/extinction is the No.1 priority for humanity. Therefore moral action is preventing extinction.
Do you disagree with it because it's not "objective" enough?
Otherwise your "belief in global warming" is immaterial. Yes - global warming is real. So what? It's not like we are optimising for survival or anything.
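The "utility-function" framing in this post can be written down as a toy model. The policy names and probabilities below are invented purely for illustration; they are not claims about actual risk levels:

```python
# A sketch of the quoted survival utility function: rank policies by
# expected number of survivors. All numbers are hypothetical.

policies = {
    "ignore risk":      {"p_extinction": 0.10, "population": 8_000_000_000},
    "mitigate AI risk": {"p_extinction": 0.01, "population": 8_000_000_000},
}

def survival_utility(policy):
    """Expected survivors under a policy."""
    return (1 - policy["p_extinction"]) * policy["population"]

best = max(policies, key=lambda name: survival_utility(policies[name]))
print(best)  # prints: mitigate AI risk
```

This makes the post's structure explicit: once survival is the utility function, "moral action" reduces to whichever policy maximizes it. Whether that choice of utility function is itself "objective" is the open question of the thread.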
Re: What could make morality objective?
It's only going to kill us if we let it have the ability to kill us, and something goes wrong. AI is top 5 in my book too btw, danger-wise.TimeSeeker wrote: ↑Tue Sep 18, 2018 3:27 pmCall it rainbow-farting Unicorns if you think it's so important. The fact that is is GOING TO KILL US. Is the part that matters most. Do you have Attention deficit disorder perhaps?
If you would communicate in a way that others can understand, maybe you would find that they actually have something to say about the topic.
There is no such thing as objective morality. But imo preventing human harm/extinction is one of the best moralities humanity could try to agree on in the future, and then maybe pretend that it's objective.I have given you an "objective morality" utility-function - human survival. Therefore preventing human harm/extinction is the No.1 priority for humanity. Therefore moral action is preventing extinction.
Do you disagree with it because it's not "objective" enough?
How does this follow from the previous, and what do you mean by immaterial?Otherwise your "belief in global warming" is immaterial. Yes - global warming is real. So what? It's not like we are optimising for survival or anything.
Btw I didn't mention global warming, and that's not an existential threat.
-
TimeSeeker
- Posts: 2866
- Joined: Tue Sep 11, 2018 8:42 am
Re: What could make morality objective?
How do you prevent it from having the ability to kill us? How do we exercise control?
There we go with the semantics again.
Yes - there is no such thing as objectivity. We made it up. There is no such thing as morality (right and wrong) either. We, humans, made it up too.
We can get away from all the bickering; it should suffice for me to point you to https://en.wikipedia.org/wiki/Construct ... istemology
But in the context of the INVENTED notions of "objectivity" and "morality", do you think avoiding extinction qualifies as "objective morality"? The most noble/important human pursuit? Or do you think there is something more important than that?
Why would anyone care if global warming is "real" or "not real" if they care about continued human survival?
You don't think the domino effect of global warming, causing uncontrollable positive feedback loops ( https://en.wikipedia.org/wiki/Positive_feedback ) in the ecosystem and leading to a systemic collapse of Earth's biosphere, is a potential existential threat?
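The linear-versus-runaway distinction behind this question is easy to show with a toy simulation. The coefficients below are invented, not climate data:

```python
# Illustrative only: a state variable under constant forcing stays linear,
# but the same forcing plus positive feedback grows exponentially.

def simulate(steps, forcing=1.0, feedback=0.0):
    """x grows each step by a constant forcing plus a fraction of itself."""
    x = 0.0
    for _ in range(steps):
        x += forcing + feedback * x
    return x

print(simulate(50, feedback=0.0))  # no feedback: linear, 50.0
print(simulate(50, feedback=0.1))  # positive feedback: runaway, far larger
```

This is the structural point about positive feedback: a system that looks linear early on can still be on an exponential trajectory, which is why the later question about linear versus exponential rates of change matters.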
Re: What could make morality objective?
By not giving it the tools/ability to kill us?
Why do you have such contempt for the "philosophers", when you keep twisting the semantics too like some of those philosophers?There we go with the semantics again.
Yes - there is no such thing as objectivity. We made it up. There is no such thing as morality (right and wrong) either. We, humans, made it up too.
We can get away from all the bickering and suffice for me to point you to https://en.wikipedia.org/wiki/Construct ... istemology
But in the context of the INVENTED notions of "objectivity" and "morality", do you think avoiding extinction qualifies as "objective morality"? The most noble/important human pursuit? Or do you think there is something more important than that?
Yes, if we pretend that there is objective morality, then survival/thriving is a good morality, OBVIOUSLY.
I don't understand.Why would anyone care if global warming is "real" or "not real" if I they care about continued human survival?
Pfft of course not, global warming by itself could never wipe out humanity. Over time it could wipe out the majority though.You don't think the domino effect of global warming, might cause uncontrollable positive feedback loops ( https://en.wikipedia.org/wiki/Positive_feedback ) in the ecosystem leading to a systemic collapse of Earth's biosphere is a potential existential threat?
-
TimeSeeker
- Posts: 2866
- Joined: Tue Sep 11, 2018 8:42 am
Re: What could make morality objective?
HOW? People have been working on this for 20 years. Do you think they are idiots to not have thought of this sooner?
Because philosophy is a tool. To solve HUMAN, not linguistic/semantic problems. At least that's what I am using it for. And because I do believe in objective morality - I use the definition/meaning which points directly at the problem. The elephant in the room. Human extinction.
OK. Do you want to pretend this is objective morality or do you have any objections to the conception?
Do you care if global warming is real. And if yes - why do you care?
Over what period of time do you think it can become an existential threat? Do you think the rate of change will remain linear, or could it become exponential?
Last edited by TimeSeeker on Tue Sep 18, 2018 4:29 pm, edited 1 time in total.
Re: What could make morality objective?
Are we forced to let advanced AI handle nuclear weapons and such?TimeSeeker wrote: ↑Tue Sep 18, 2018 4:17 pmHOW? People have been working on this for 20 years. Do you think they are idiots to not have thought of this sooner?
Well I think most people mean something more by philosophy.Because philosophy is a tool. To solve HUMAN problems. At least that's what I am using it for.
I don't want to pretend it since I prefer clarity and we are on a philosophy forum.OK. Do you want to pretend this is objective morality or do you have any objections to the conception?
I think never. Global warming by itself could never wipe out humanity.Over what period of time do you think it can become an existential threat?
Which isn't to say that we shouldn't try to stop it immediately. For the maximum welfare of humanity and all life on the planet.
-
TimeSeeker
- Posts: 2866
- Joined: Tue Sep 11, 2018 8:42 am
Re: What could make morality objective?
How do we prevent it from taking control away from humans? It doesn't even have to target nuclear weapons.
It can go for our infrastructure - power, water. Food supply.
We haven't figured out how to secure a single computer system.
No idea what people think philosophy is or isn't.
No philosopher I've ever spoken to can give me a clear objective/success/failure criteria for "philosophy".
I can give you clear objective/success/failure criteria for science: prediction and control of the environment.
I can give you clear objective/success/failure criteria for morality also: avoiding human extinction.
Do you want to play the "define extinction" game?
What are your objective success/failure criteria for "clarity" ? How do you know/verify/confirm/recognize that you have attained clarity?
I don't understand what you are saying. If global warming isn't a threat to humanity how is stopping global warming maximising welfare for humanity?
I also don't understand what you mean by "global warming by itself". It's part of a (very!) complex system that is the Earth's biosphere. We neither understand how it works, nor have any control over it if it starts behaving in an erratic manner.
Last edited by TimeSeeker on Tue Sep 18, 2018 4:41 pm, edited 1 time in total.