
Re: What could make morality objective?

Posted: Tue Sep 18, 2018 2:51 pm
by Atla
TimeSeeker wrote: Tue Sep 18, 2018 2:42 pm Which part of dying/extinction is ambiguous to you? Is dying a reification fallacy?
You know it's hard to take you seriously when you are concerned about what entropy will do to us in a few tens of billions of years. But you are unconcerned about current major problems.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 2:57 pm
by TimeSeeker
Atla wrote: Tue Sep 18, 2018 2:50 pm Yes AI is going to kill us, entropy is going to kill us etc, let's digitize ourselves and go to space.

Formal languages have no agency, that's a reification fallacy. We will be able to program AI with "agency" in the future though, or they may develop one.
If pedantry is what you are after then you are also guilty of the same fallacy. Machine learning algorithms do not have "agency". They never will. They have goals. Success/Failure criteria.
Agency is not required for harm to occur. You are pre-supposing "malice". A benevolent AI is as dangerous to humans as a malevolent one.

Your stupid is making me angry.

https://wiki.lesswrong.com/wiki/Paperclip_maximizer
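The goals-not-agency point from the linked thought experiment can be sketched as a toy optimizer. This is a hypothetical illustration with made-up resources and yields, not code from the article: the program has only a success criterion, no malice and no agency, yet it converts everything into paperclips.

```python
# Toy "paperclip maximizer": a goal-driven optimizer with no malice and
# no agency -- it simply maximizes its success criterion.
# All resources and yield numbers are invented for illustration.

def paperclip_yield(resource):
    """Success criterion: paperclips produced per unit of resource."""
    return {"iron_ore": 1, "cars": 50, "hospitals": 200}[resource]

def maximize_paperclips(world):
    clips = 0
    # Greedily convert whatever yields the most paperclips next --
    # the optimizer has no concept of what a hospital is *for*.
    while any(world.values()):
        best = max((r for r in world if world[r] > 0), key=paperclip_yield)
        world[best] -= 1
        clips += paperclip_yield(best)
    return clips

world = {"iron_ore": 10, "cars": 3, "hospitals": 2}
print(maximize_paperclips(world))  # 560 -- and the world dict is now empty
```

Nothing in the loop "wants" anything; harm falls out of the success criterion alone.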

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 2:59 pm
by TimeSeeker
Atla wrote: Tue Sep 18, 2018 2:51 pm
TimeSeeker wrote: Tue Sep 18, 2018 2:42 pm Which part of dying/extinction is ambiguous to you? Is dying a reification fallacy?
You know it's hard to take you seriously when you are concerned about what entropy will do to us in a few tens of billions of years. But you are unconcerned about current major problems.
If they are not existential risks to HUMANITY they are not "major" problems. If you integrate the time-function of continued human existence you will save more lives than preventing all human wars. The good of the many outweighs the good of the few - Individuals are expendable.

You too have fallen for the trap of temporal discounting.
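To make the temporal-discounting trap concrete: under any nonzero discount rate, lives far in the future are weighted toward zero, which is exactly why near-term problems feel "major" and extinction risk doesn't. A minimal sketch with an arbitrary illustrative rate:

```python
# How temporal discounting shrinks the weight of future lives.
# The 3% annual rate is an arbitrary illustrative figure.
def discounted_weight(years, rate=0.03):
    """Weight assigned to a life saved `years` from now."""
    return 1.0 / (1.0 + rate) ** years

for years in (0, 10, 100, 1000):
    print(years, discounted_weight(years))
# At 100 years the weight is already ~0.05; at 1000 years it is
# effectively zero -- future generations vanish from the calculation.
```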

Objective morality exists. The no harm principle. https://en.wikipedia.org/wiki/Precautionary_principle
The principle implies that there is a social responsibility to protect the public from exposure to harm, when scientific investigation has found a plausible risk. These protections can be relaxed only if further scientific findings emerge that provide sound evidence that no harm will result.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 3:06 pm
by Immanuel Can
Dubious wrote: Tue Sep 18, 2018 3:59 am
Immanuel Can wrote: Tue Sep 18, 2018 1:17 am Like I say, Dube: it's not me you have the contradiction with...it's yourself.
No contradictions; my logic is perfectly clear on the subject.
The one of us who does not believe in objective right and wrong...

...says I am wrong.

...says I am doing wrong.

...says I am being the wrong kind of person.

...says I have wronged him.

...insists he is right.

...says I do not respond in the right way.

...says that it is not right of me to be what I am.

...rages and insults, because he feels I have violated his rights.

...believes that right-thinking people ought to agree with him.

But there is no contradiction. :shock:

Is that right?

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 3:07 pm
by TimeSeeker
Immanuel Can wrote: Tue Sep 18, 2018 3:06 pm
Dubious wrote: Tue Sep 18, 2018 3:59 am
Immanuel Can wrote: Tue Sep 18, 2018 1:17 am Like I say, Dube: it's not me you have the contradiction with...it's yourself.
No contradictions; my logic is perfectly clear on the subject.
The one of us who does not believe in objective right and wrong...

...says I am wrong.

...says I am doing wrong.

...says I am being the wrong kind of person.

...says I have wronged him.

...insists he is right.

...says I do not respond in the right way.

...says that it is not right of me to be what I am.

...rages and insults, because he feels I have violated his rights.

...believes that right-thinking people ought to agree with him.

But there is no contradiction. :shock:

Is that right?
That's the exact same behavior you display. It is not a contradiction. It is the will to power.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 3:15 pm
by Peter Holmes
TimeSeeker wrote: Tue Sep 18, 2018 2:00 pm
Peter Holmes wrote: Tue Sep 18, 2018 1:54 pm You seem to be looking for something to disagree with - and missing the mark.

We can produce reasonable (and at least social-evolutionary) explanations for why we invented formal languages, which are just metalanguages. We didn't invent the natural languages I'm referring to, any more than other species invented the sophisticated communication codes they've developed.
You seem to be trying to ignore the distinction I am drawing and missing the mark. They are not "metalanguages". There is something fundamentally different about the Chomsky hierarchy.

Lambda calculus solves the problem of meaning and the symbol-grounding problem. https://en.wikipedia.org/wiki/Symbol_grounding_problem
It solves the problems of reduction which plagues natural languages: https://en.wikipedia.org/wiki/Reduction_(complexity)

Through computation (interpretation) these languages have a direct effect on reality - automation/robotics/AI. They have agency and they become prescriptive rather than descriptive.
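The claim that formal languages ground meaning in computation can at least be illustrated: in the lambda calculus, a term's "meaning" is operational - it is whatever the term reduces to. A minimal Church-numeral sketch (an illustration of the general idea, not a solution to the symbol-grounding problem):

```python
# Church numerals: numbers defined purely as lambda terms.
# A numeral n is a function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """'Ground' a Church numeral by running it on ordinary integers."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # 5
```

The symbols carry no meaning until interpreted; computation is what cashes them out.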

This leads directly to the ethical problems around friendly AI. Bostrom and Yudkowsky's work. This is the scariest thought experiment you probably haven't considered: https://wiki.lesswrong.com/wiki/Paperclip_maximizer

If you don't see that as a significant/consequential distinction and it's all "just language" then I am happy to acknowledge your interest in philosophy as purely academic.
To expand on what I said earlier: are any of the solutions to the so-called problems of meaning and reference themselves non-linguistic? Do you think we can get behind, beneath, through or beyond language by means of language - natural or formal?

My interest in philosophy isn't solely academic - though I wouldn't be ashamed if it were. But you seem to be grinding some utterly irrelevant axe about how we or the machines we make do or will use language to destroy everything. Non-sequitur.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 3:15 pm
by Atla
TimeSeeker wrote: Tue Sep 18, 2018 2:57 pmIf pedantry is what you are after then you are also guilty of the same fallacy.
I'm not after pedantry, I'm not a philosopher, and it wouldn't be the same fallacy.
Machine learning algorithms do not have "agency". They never will. They have goals. Success/Failure criteria.
Well, now, yeah. A sufficiently advanced AI could develop learning algorithms with what could be called agency though.
Agency is not required for harm to occur. You are pre-supposing "malice". A benevolent AI is as dangerous to humans as a malevolent one.
I didn't pre-suppose malice, where did I write that?
A malevolent AI would most likely be more dangerous though than a benevolent one, obviously.
If they are not existential risks to HUMANITY they are not "major" problems. If you integrate the time-function of continued human existence you will save more lives than preventing all human wars.

You too have fallen for the trap of temporal discounting.
I don't quite see how wiping out humanity isn't an existential threat to humanity. If we are extinct at time X, we are also extinct at time X+Y.
Objective morality exists. The no harm principle.
Of course no objective morality can ever exist, no idea what you're saying. But we can agree on a no harm principle.

Maybe if you would communicate your ideas in a saner way?

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 3:24 pm
by TimeSeeker
Peter Holmes wrote: Tue Sep 18, 2018 3:15 pm To expand on what I said earlier: are any of the solutions to the so-called problems of meaning and reference themselves non-linguistic? Do you think we can get behind, beneath, through or beyond language by means of language - natural or formal?
Holy shit. More red herrings! FOCUS. There are problems and then there are PROBLEMS.
Peter Holmes wrote: Tue Sep 18, 2018 3:15 pm My interest in philosophy isn't solely academic - though I wouldn't be ashamed if it were. But you seem to be grinding some utterly irrelevant axe about how we or the machines we make do or will use language to destroy everything. Non-sequitur.
We are on a thread called "What could make morality objective?"
I am pointing to you that AI is an EXISTENTIAL THREAT to HUMANITY.

And you think it's a non-sequitur?

If your conception of "objective morality" has no overlap with "human survival" I think you've missed the forest for the trees.

I don't often go to such lengths - but people who think like that truly deserve to be expelled from society. You are stealing air.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 3:27 pm
by TimeSeeker
Atla wrote: Tue Sep 18, 2018 3:15 pm Well, now, yeah. A sufficiently advanced AI could develop learning algorithms with what could be called agency though.
Call it rainbow-farting Unicorns if you think it's so important. The fact that it is GOING TO KILL US is the part that matters most. Do you have Attention deficit disorder perhaps?
Atla wrote: Tue Sep 18, 2018 3:15 pm Of course no objective morality can ever exist, no idea what you're saying. But we can agree on a no harm principle.
I have given you an "objective morality" utility-function - human survival. Therefore preventing human harm/extinction is the No.1 priority for humanity. Therefore moral action is preventing extinction.

Do you disagree with it because it's not "objective" enough?

Otherwise your "belief in global warming" is immaterial. Yes - global warming is real. So what? It's not like we are optimising for survival or anything.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 3:44 pm
by Atla
TimeSeeker wrote: Tue Sep 18, 2018 3:27 pm Call it rainbow-farting Unicorns if you think it's so important. The fact that it is GOING TO KILL US is the part that matters most. Do you have Attention deficit disorder perhaps?
It's only going to kill us if we let it have the ability to kill us, and something goes wrong. AI is top 5 in my book too btw, danger-wise.

If you would communicate in a way that others can understand, maybe you would find that they actually have something to say about the topic.
I have given you an "objective morality" utility-function - human survival. Therefore preventing human harm/extinction is the No.1 priority for humanity. Therefore moral action is preventing extinction.

Do you disagree with it because it's not "objective" enough?
There is no such thing as objective morality. But imo preventing human harm/extinction is one of the best moralities humanity could try to agree on in the future, and then maybe pretend that it's objective.
Otherwise your "belief in global warming" is immaterial. Yes - global warming is real. So what? It's not like we are optimising for survival or anything.
How does this follow from the previous, and what do you mean by immaterial?

Btw I didn't mention global warming, and that's not an existential threat.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 3:49 pm
by TimeSeeker
Atla wrote: Tue Sep 18, 2018 3:44 pm It's only going to kill us if we let it have the ability to kill us, and something goes wrong. AI is top 5 in my book too btw, danger-wise.
How do you prevent it from having the ability to kill us? How do we exercise control?
Atla wrote: Tue Sep 18, 2018 3:44 pm There is no such thing as objective morality.
There we go with the semantics again.
Yes - there is no such thing as objectivity. We made it up. There is no such thing as morality (right and wrong) either. We, humans, made it up too.
We can get away from all the bickering and suffice for me to point you to https://en.wikipedia.org/wiki/Construct ... istemology

But in the context of the INVENTED notions of "objectivity" and "morality", do you think avoiding extinction qualifies as "objective morality"? The most noble/important human pursuit? Or do you think there is something more important than that?
Atla wrote: Tue Sep 18, 2018 3:44 pm How does this follow from the previous, and what do you mean by immaterial?
Why would anyone care if global warming is "real" or "not real" if they care about continued human survival?
Atla wrote: Tue Sep 18, 2018 3:44 pm Btw I didn't mention global warming, and that's not an existential threat.
You don't think that the domino effect of global warming, causing uncontrollable positive feedback loops ( https://en.wikipedia.org/wiki/Positive_feedback ) in the ecosystem and leading to a systemic collapse of Earth's biosphere, is a potential existential threat?

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 4:04 pm
by Atla
TimeSeeker wrote: Tue Sep 18, 2018 3:49 pmHow do you prevent it from killing us?
By not giving it the tools/ability to kill us?
There we go with the semantics again.
Yes - there is no such thing as objectivity. We made it up. There is no such thing as morality (right and wrong) either. We, humans, made it up too.
We can get away from all the bickering and suffice for me to point you to https://en.wikipedia.org/wiki/Construct ... istemology

But in the context of the INVENTED notions of "objectivity" and "morality", do you think avoiding extinction qualifies as "objective morality"? The most noble/important human pursuit? Or do you think there is something more important than that?
Why do you have such contempt for the "philosophers", when you keep twisting the semantics too like some of those philosophers?
Yes, if we pretend that there is objective morality, then survival/thriving is a good morality, OBVIOUSLY.
Why would anyone care if global warming is "real" or "not real" if they care about continued human survival?
I don't understand.
You don't think that the domino effect of global warming, causing uncontrollable positive feedback loops ( https://en.wikipedia.org/wiki/Positive_feedback ) in the ecosystem and leading to a systemic collapse of Earth's biosphere, is a potential existential threat?
Pfft of course not, global warming by itself could never wipe out humanity. Over time it could wipe out the majority though.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 4:17 pm
by TimeSeeker
Atla wrote: Tue Sep 18, 2018 4:04 pm By not giving it the tools/ability to kill us?
HOW? People have been working on this for 20 years. Do you think they are idiots to not have thought of this sooner?
Atla wrote: Tue Sep 18, 2018 4:04 pm Why do you have such contempt for the "philosophers", when you keep twisting the semantics too like some of those philosophers?
Because philosophy is a tool. To solve HUMAN, not linguistic/semantic problems. At least that's what I am using it for. And because I do believe in objective morality - I use the definition/meaning which points directly at the problem. The elephant in the room. Human extinction.
Atla wrote: Tue Sep 18, 2018 4:04 pm Yes, if we pretend that there is objective morality, then survival/thriving is a good morality, OBVIOUSLY.
OK. Do you want to pretend this is objective morality or do you have any objections to the conception?
Atla wrote: Tue Sep 18, 2018 4:04 pm I don't understand.
Do you care if global warming is real? And if yes - why do you care?
Atla wrote: Tue Sep 18, 2018 4:04 pm Pfft of course not, global warming by itself could never wipe out humanity. Over time it could wipe out the majority though.
Over what period of time do you think it can become an existential threat? Do you think the rate of change will remain linear, or could it become exponential?
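The linear-vs-exponential question is the crux of the feedback-loop worry: a small per-step change that feeds back into itself eventually dwarfs any fixed-increment trend. A minimal sketch with made-up numbers, not climate data:

```python
# Linear change vs. a positive feedback loop (compounding change).
# All numbers are illustrative only -- this is not a climate model.
linear, feedback = 0.0, 0.01
for step in range(100):
    linear += 0.01       # fixed increment each step
    feedback *= 1.10     # each step's change feeds the next step (+10%)

print(linear)    # 1.0 after 100 steps
print(feedback)  # ~138 after 100 steps -- the loop overtakes the trend
```

Extrapolating linearly from early data points systematically underestimates a compounding process.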

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 4:29 pm
by Atla
TimeSeeker wrote: Tue Sep 18, 2018 4:17 pmHOW? People have been working on this for 20 years. Do you think they are idiots to not have thought of this sooner?
Are we forced to let advanced AI handle nuclear weapons and such?
Because philosophy is a tool. To solve HUMAN problems. At least that's what I am using it for.
Well I think most people mean something more by philosophy.
OK. Do you want to pretend this is objective morality or do you have any objections to the conception?
I don't want to pretend it since I prefer clarity and we are on a philosophy forum.
Over what period of time do you think it can become an existential threat?
I think never. Global warming by itself could never wipe out humanity.
Which isn't to say that we shouldn't try to stop it immediately. For the maximum welfare for humanity and all life on the planet.

Re: What could make morality objective?

Posted: Tue Sep 18, 2018 4:34 pm
by TimeSeeker
Atla wrote: Tue Sep 18, 2018 4:29 pm Are we forced to let advanced AI handle nuclear weapons and such?
How do we prevent it from taking control away from humans? It doesn't even have to target nuclear weapons.
It can go for our infrastructure - power, water. Food supply.

We haven't figured out how to secure a single computer system.
Atla wrote: Tue Sep 18, 2018 4:29 pm Well I think most people mean something more by philosophy.
No idea what people think philosophy is or isn't.
No philosopher I've ever spoken to can give me clear objective/success/failure criteria for "philosophy".
I can give you clear objective/success/failure criteria for science: prediction and control of the environment.
I can give you clear objective/success/failure criteria for morality also: avoiding human extinction

Do you want to play the "define extinction" game?
Atla wrote: Tue Sep 18, 2018 4:29 pm I don't want to pretend it since I prefer clarity and we are on a philosophy forum.
What are your objective success/failure criteria for "clarity" ? How do you know/verify/confirm/recognize that you have attained clarity?
Atla wrote: Tue Sep 18, 2018 4:29 pm I think never. Global warming by itself could never wipe out humanity.
Which isn't to say that we shouldn't try to stop it immediately. For the maximum welfare for humanity and all life on the planet.
I don't understand what you are saying. If global warming isn't a threat to humanity how is stopping global warming maximising welfare for humanity?

I also don't understand what you mean by "global warming by itself". It's part of a (very!) complex system that is the Earth's biosphere. We neither understand how it works, nor have any control over it if it starts behaving in an erratic manner.