lennartack wrote:
We're only philosophizing, not designing the program! It would certainly be difficult, but that makes it even more interesting.
Hey? We are philosophising about the possibility that a computer could be used to decide on moral issues.
That would have to include asking the question if such a program were possible.
..... obviously
You haven't given any reasons why it's not possible.
Why it is possible: when we are given a moral problem we decide based on rules we know intuitively or have learned. If we can use these rules to decide on the problem, it's also possible to insert the rules in a computer and let the computer decide.
The hardest problem is setting up the rules. Of course a computer can't (yet) reason about intrinsic values like happiness, but it can calculate "happiness values" for real-life situations based on the rules we insert. This gives an objective way of estimating people's happiness. When a moral problem is given, the computer could choose the option where the most happiness points are gained with minimal happiness loss. You could tell the computer to look at collective happiness only, or at individual happiness and not allow happiness loss for individuals, or both.
I imagine I'd set up my personal ethical assistant to multiply individual happiness values by a factor based on how close those people are to me. I would give myself the highest factor, then my friends and family, then the rest of the world. When the computer decides on a problem, personal gains will be more important than gains for people I don't know.
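The weighting scheme described above can be sketched as a toy calculation. Everything concrete here, the closeness factors, the options, and the happiness deltas, is invented purely for illustration:

```python
# Toy sketch of the "personal ethical assistant" described above.
# All names, closeness factors, and happiness deltas are invented
# for illustration; nothing here is a real measurement.

# Closeness factor per person: self > friends/family > strangers.
closeness = {"me": 3.0, "friend": 2.0, "stranger": 1.0}

def weighted_happiness(option):
    """Sum each affected person's happiness change, scaled by closeness."""
    return sum(closeness[person] * delta for person, delta in option.items())

# Two hypothetical options for some moral problem:
options = {
    "keep the money":   {"me": +5, "friend": 0, "stranger": -2},
    "donate the money": {"me": -1, "friend": 0, "stranger": +6},
}

# Pick the option with the highest weighted happiness score.
best = max(options, key=lambda name: weighted_happiness(options[name]))
print(best)  # "keep the money"
```

Unweighted, "donate the money" creates more total happiness (+5 against +3), but the closeness factors flip the decision, which is exactly the personal bias being described.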
lennartack wrote:
I imagine I'd set up my personal ethical assistant to multiply individual happiness values by a factor based on how close those people are to me. I would give myself the highest factor, then my friends and family, then the rest of the world. When the computer decides on a problem, personal gains will be more important than gains for people I don't know.
lennartack wrote:
We're only philosophizing, not designing the program! It would certainly be difficult, but that makes it even more interesting.
Hey? We are philosophising about the possibility that a computer could be used to decide on moral issues.
That would have to include asking the question if such a program were possible.
..... obviously
You haven't given any reasons why it's not possible.
Why it is possible: when we are given a moral problem we decide based on rules we know intuitively or have learned. If we can use these rules to decide on the problem, it's also possible to insert the rules in a computer and let the computer decide.
Computers deal in numbers not in concepts.
Have you ever programmed a computer?
What languages have you used?
chaz wyman wrote:
Hey? We are philosophising about the possibility that a computer could be used to decide on moral issues.
That would have to include asking the question if such a program were possible.
..... obviously
You haven't given any reasons why it's not possible.
Why it is possible: when we are given a moral problem we decide based on rules we know intuitively or have learned. If we can use these rules to decide on the problem, it's also possible to insert the rules in a computer and let the computer decide.
Computers deal in numbers not in concepts.
Have you ever programmed a computer?
What languages have you used?
C, C++, Java, Python, PHP, and more. You? Let me guess, you did Pascal at university?
lennartack wrote:
You haven't given any reasons why it's not possible.
Why it is possible: when we are given a moral problem we decide based on rules we know intuitively or have learned. If we can use these rules to decide on the problem, it's also possible to insert the rules in a computer and let the computer decide.
Computers deal in numbers not in concepts.
Have you ever programmed a computer?
What languages have you used?
C, C++, Java, Python, PHP, and more. You? Let me guess, you did Pascal at university?
Yes, and BASIC, Logo, and COBOL.
You ought to know better. Either you are lying about your languages, or are naive about morality.
chaz wyman wrote:
Computers deal in numbers not in concepts.
Have you ever programmed a computer?
What languages have you used?
C, C++, Java, Python, PHP, and more. You? Let me guess, you did Pascal at university?
Yes, and BASIC, Logo, and COBOL.
You ought to know better. Either you are lying about your languages, or are naive about morality.
Let me introduce a third possibility: you do not understand what we mean by this computer. A computer can't reason about intrinsic values like happiness, but we can try to quantify happiness and calculate (expected and estimated) happiness values based on the rules we give the computer.
Maybe I am naive about morality, enlighten me!
As a side note, do you think all computer scientists who think AI might reach the level of human reasoning "ought to know better"? Personally, I don't know, but I do not agree with the arguments based on Gödel's incompleteness theorems. Do you know a better argument? (An argument based on programming languages won't count; it will of course not be a programmed computer.)
lennartack wrote:
C, C++, Java, Python, PHP, and more. You? Let me guess, you did Pascal at university?
Yes, and BASIC, Logo, and COBOL.
You ought to know better. Either you are lying about your languages, or are naive about morality.
Let me introduce a third possibility: you do not understand what we mean by this computer. A computer can't reason about intrinsic values like happiness, but we can try to quantify happiness and calculate (expected and estimated) happiness values based on the rules we give the computer.
Maybe I am naive about morality, enlighten me!
As a side note, do you think all computer scientists who think AI might reach the level of human reasoning "ought to know better"? Personally, I don't know, but I do not agree with the arguments based on Gödel's incompleteness theorems. Do you know a better argument? (An argument based on programming languages won't count; it will of course not be a programmed computer.)
Yes, I get all you say.
A moralising computer would be absurd.
chaz wyman wrote:
A moralising computer would be absurd.
So you have changed your position from "not possible" to "absurd"? In any case, a convincing argument is still lacking.
chaz wyman wrote:
Felasco wrote:
How can you measure happiness?
This has been done. A study in France looked at the brains of many people reporting experiences of happiness and established reliable patterns.
Then they measured the same brain activity in a Buddhist monk, and he was off the scale, showing unprecedented levels of happiness.
Which is of course an empirical study which conclusively proves that religion is wrong, Wrong, WRONG!!! just like Chaz has been saying all along.
Degrees of happiness do not demonstrate the truth or falsity of any religious claim.
Your posts are getting increasingly ridiculous.
If you read his post again you will see that he made no claims about the truth or falsity of any religion.
The argument is this:
1. Things that make people happy are good for these people (definition)
2. Buddhism makes people happy
3. Buddhism is good for the people it makes happy
This argument is valid unless you have another definition of good and bad.
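The three numbered steps above could be written out literally as rules. A minimal sketch, in which the two premises are simply hard-coded placeholders rather than anything measured or derived:

```python
# Naive encoding of the three-step argument as hard-coded rules.
# Both premises are taken at face value; nothing here is measured.

def makes_happy(thing, person):
    # Premise 2: Buddhism makes people happy (assumed, not measured).
    return thing == "Buddhism"

def is_good_for(thing, person):
    # Premise 1 (definition): what makes a person happy is good for them.
    return makes_happy(thing, person)

# Step 3 then follows mechanically, for every person the rule is asked about:
print(is_good_for("Buddhism", "anyone"))  # True
```

The point of the sketch is only that the conclusion falls out of the premises automatically; whether those premises deserve to be rules at all is the actual dispute.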
lennartack wrote:
So you have changed your position from "not possible" to "absurd"? In any case, a convincing argument is still lacking.
...
The argument is this:
1. Things that make people happy are good for these people (definition)
2. Buddhism makes people happy
3. Buddhism is good for the people it makes happy
This argument is valid unless you have another definition of good and bad.
The above is not computer code.
A computer coded with the above information would advise all people to be Buddhists.
That is absurd.
What I mean by "not possible" is that a computer can never 'understand' morality, and making such a computer would be absurd, as the results would be absurd.
Last edited by chaz wyman on Fri Jan 11, 2013 4:58 pm, edited 1 time in total.
chaz wyman wrote:
A moralising computer would be absurd.
So you have changed your position from "not possible" to "absurd"? In any case, a convincing argument is still lacking.
chaz wyman wrote:
Felasco wrote:
Which is of course an empirical study which conclusively proves that religion is wrong, Wrong, WRONG!!! just like Chaz has been saying all along.
Degrees of happiness do not demonstrate the truth or falsity of any religious claim.
Your posts are getting increasingly ridiculous.
If you read his post again you will see that he made no claims about the truth or falsity of any religion.
And if you had taken the trouble to follow the thread, you would notice that the statement was not directed at you but at Felasco.
lennartack wrote:
So you have changed your position from "not possible" to "absurd"? In any case, a convincing argument is still lacking.
...
The argument is this:
1. Things that make people happy are good for these people (definition)
2. Buddhism makes people happy
3. Buddhism is good for the people it makes happy
This argument is valid unless you have another definition of good and bad.
The above is not computer code.
A computer coded with the above information would advise all people to be Buddhists.
That is absurd.
What I mean by "not possible" is that a computer can never 'understand' morality, and making such a computer would be absurd, as the results would be absurd.
Hah, as if human moral reasoning is not absurd. Look at all the misery in the world caused by man's moral reasoning.
I do not understand why a computer will need to understand morality to produce proper results. Most people do not understand morality. They reason according to dogmatic rules they have learned but do not understand at all. I am sure you agree with me when these rules are religious ones, like "you can't have sex before marriage".
I do believe that initial results will be absurd, but that would be caused by errors or incompleteness in the rules we have given the computer, and could therefore be fixed.
chaz wyman wrote:
And if you had taken the trouble to follow the thread, you would notice that the statement was not directed at you but at Felasco.
Yes..? Please respond to it, regardless of to whom your statement was directed.
lennartack wrote:
So you have changed your position from "not possible" to "absurd"? In any case, a convincing argument is still lacking.
...
The argument is this:
1. Things that make people happy are good for these people (definition)
2. Buddhism makes people happy
3. Buddhism is good for the people it makes happy
This argument is valid unless you have another definition of good and bad.
The above is not computer code.
A computer coded with the above information would advise all people to be Buddhists.
That is absurd.
What I mean by "not possible" is that a computer can never 'understand' morality, and making such a computer would be absurd, as the results would be absurd.
Hah, as if human moral reasoning is not absurd. Look at all the misery in the world caused by man's moral reasoning.
Indeed, and here you have shot yourself in the foot. No computer will ever satisfy human reasoning on the matter of moral concerns. Computers are great at crunching numbers, but their results always follow the assumptions of the programmer, and since humans always come along with a range of different moral assumptions, a computer would only ever be great for one person, who could just as well decide for himself rather than giving the job to his personal computer.
Computers deal with quantity. Moral variables deal with quality.
I do not understand why a computer will need to understand morality to produce proper results.
Dah!
Most people do not understand morality. They reason according to dogmatic rules they have learned but do not understand at all. I am sure you agree with me when these rules are religious ones, like "you can't have sex before marriage".
And are you going to tell your computer that this is a rule?
I do believe that initial results will be absurd, but that would be caused by errors or incompleteness in the rules we have given the computer, and could therefore be fixed.
Please try to grasp the difference between qualitative results and quantitative results.