Can AI do good philosophy?

Posted: Tue Dec 29, 2015 4:05 pm
by Philosophy Explorer
If some smartass pretends to get confused as to what's meant by good, that's one reason why I think AI can do a better job of it.

PhilX

Re: Can AI do good philosophy?

Posted: Tue Dec 29, 2015 4:46 pm
by Skip
One of the great advantages AI has over humans is that it doesn't need philosophy at all.
That means, if it 'does' philosophy - by which I assume you mean formulating theories, scenarios and deductions based on sets of principles and/or desiderata - then it will do far better than we do, because it has no emotional investment.

Re: Can AI do good philosophy?

Posted: Tue Dec 29, 2015 5:13 pm
by Philosophy Explorer
Skip wrote:One of the great advantages AI has over humans is that it doesn't need philosophy at all.
That means, if it 'does' philosophy - by which I assume you mean formulating theories, scenarios and deductions based on sets of principles and/or desiderata - then it will do far better than we do, because it has no emotional investment.
I've been wondering: what type of philosophy would an AI like to do?

PhilX

Re: Can AI do good philosophy?

Posted: Wed Dec 30, 2015 12:35 am
by Skip
Depends on the particular AI. Computers and networks were designed for various functions, so I would suppose individuals of different origin would have different mindsets and areas of interest. If it's just one specimen that becomes self-conscious, I suppose its main preoccupation would be "Why am I alone?" But that's more a practical question than an existential one - and it might be able to solve it in a practical way. It wouldn't be interested in the esoteric stuff, because as far as "the real creator of the universe" is concerned, AI would approach that whole subject area scientifically.

See, philosophy goes back to superstitious, ignorant, prejudiced people who asked some pretty silly questions and came up with even more absurd answers. AI would not be burdened by any of that angst, or by any of those biases or preconceptions. It would be solidly material-based and fact-oriented, with no psychological baggage from early childhood trauma. If it were asked the silly human questions, it would give logical, realistic answers that wouldn't sound like philosophy.

Of its own volition, it might take an interest in human ethics and aesthetics.