The logical error of the Liar Paradox

What is the basis for reason? And mathematics?


PeteOlcott
Posts: 1597
Joined: Mon Jul 25, 2016 6:55 pm

Re: The logical error of the Liar Paradox

Post by PeteOlcott »

Logik wrote: Wed Apr 03, 2019 10:16 pm
PeteOlcott wrote: Wed Apr 03, 2019 10:12 pm I "equivocate" truth essentially the same way that Kant does in his Critique of Pure Reason.
Leave Kant out of this. I am holding you accountable to the standards you've proposed and insist on.

They are your own criteria, not mine. Live up to them!

1 symbol - 1 meaning!

A globally unique identifier for every semantic meaning.
Knowledge is inherently divided up the way that Kant specified it.
I merely said the same thing using different words.
Logik
Posts: 4041
Joined: Tue Dec 04, 2018 12:48 pm

Re: The logical error of the Liar Paradox

Post by Logik »

PeteOlcott wrote: Wed Apr 03, 2019 10:26 pm Knowledge is inherently divided up the way that Kant specified it.
Lots of things have happened since Kant.

Most noteworthy being "Elephants don't play chess" and https://en.wikipedia.org/wiki/Nouvelle_AI

https://en.wikipedia.org/wiki/Nouvelle_ ... me_problem
The frame problem describes an issue with using first-order logic (FOL) to express facts about a robot in the world. Representing the state of a robot with traditional FOL requires many axioms (symbolic statements) that simply state that things in the environment do not change arbitrarily.

Nouvelle AI seeks to sidestep the frame problem by dispensing with volumes of symbolic language in the AI or robot and instead letting more complex behaviors emerge from the combination of simpler behavioral elements.
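The axiom blow-up the quote describes can be sketched directly. In the toy model below (the predicates, actions, and effects are invented for illustration), every action needs a frame axiom for each predicate it leaves unchanged, so the axiom count grows with the product of actions and predicates:

```python
# Sketch of the frame problem: a symbolic agent must state, for every
# action, which facts it does NOT change (frame axioms). Predicates
# and actions here are invented for illustration.

predicates = ["robot_at_door", "door_open", "light_on", "battery_low"]
actions = ["move", "open_door", "toggle_light"]

# Each action changes exactly one predicate; every other predicate
# needs an explicit frame axiom saying it is unchanged.
effects = {"move": "robot_at_door", "open_door": "door_open",
           "toggle_light": "light_on"}

frame_axioms = [
    f"{a} does not change {p}"
    for a in actions
    for p in predicates
    if p != effects[a]
]

print(len(frame_axioms))  # 3 actions x 3 unchanged predicates each = 9
```

Adding one more predicate adds one more frame axiom per action, which is why the symbolic approach scales so poorly as the world description grows.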
PeteOlcott
Posts: 1597
Joined: Mon Jul 25, 2016 6:55 pm

Re: The logical error of the Liar Paradox

Post by PeteOlcott »

Logik wrote: Wed Apr 03, 2019 10:34 pm
PeteOlcott wrote: Wed Apr 03, 2019 10:26 pm Knowledge is inherently divided up the way that Kant specified it.
Lots of things have happened since Kant.

Most noteworthy being "Elephants don't play chess" and https://en.wikipedia.org/wiki/Nouvelle_AI

https://en.wikipedia.org/wiki/Nouvelle_ ... me_problem
The frame problem describes an issue with using first-order logic (FOL) to express facts about a robot in the world. Representing the state of a robot with traditional FOL requires many axioms (symbolic statements) that simply state that things in the environment do not change arbitrarily.

Nouvelle AI seeks to sidestep the frame problem by dispensing with volumes of symbolic language in the AI or robot and instead letting more complex behaviors emerge from the combination of simpler behavioral elements.
http://www.cyc.com/documentation/ontologists-handbook/
My key goal is to figure out how to auto populate a knowledge ontology like Cyc.

One of the key aspects of doing this is a criterion measure of incremental improvement;
with such a measure, deep learning might become effective.

Without a formal truth predicate we lack a definite criterion measure of correctness, and
without correctness there can be no self-validation, so the system would not be able to
measure its own incremental improvements.
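A minimal sketch of the self-validation loop described above, assuming a decidable truth predicate. Arithmetic is used as the stand-in domain because there `is_true` really is decidable by direct evaluation; the fact encoding, the candidate list, and the score are all invented for the illustration:

```python
# Sketch of self-validation against a criterion measure. The
# "ontology" is a set of arithmetic facts and is_true() is decidable
# by evaluation -- a stand-in for the formal truth predicate
# discussed above.

def is_true(fact):
    """Criterion measure: evaluate an arithmetic claim (a, op, b, result)."""
    a, op, b, result = fact
    return {"+": a + b, "*": a * b}[op] == result

candidates = [(2, "+", 2, 4), (3, "*", 3, 10), (5, "+", 1, 6)]

ontology, score = set(), 0
for fact in candidates:
    if is_true(fact):          # self-validation step
        ontology.add(fact)
        score += 1             # measurable incremental improvement

print(score, len(ontology))    # 2 of the 3 candidates validate
```

The point of the sketch is the dependency: the incremental-improvement score only exists because `is_true` gives a definite verdict on every candidate.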
Logik
Posts: 4041
Joined: Tue Dec 04, 2018 12:48 pm

Re: The logical error of the Liar Paradox

Post by Logik »

PeteOlcott wrote: Wed Apr 03, 2019 11:42 pm http://www.cyc.com/documentation/ontologists-handbook/
My key goal is to figure out how to auto populate a knowledge ontology like Cyc.
You are building an expert system. https://en.wikipedia.org/wiki/Expert_system
PeteOlcott wrote: Wed Apr 03, 2019 11:42 pm One of the key aspects of doing this is a criterion measure of incremental improvement;
with such a measure, deep learning might become effective.
Deep learning still requires a training data-set, i.e. something that has been pre-classified, usually by a human.
PeteOlcott wrote: Wed Apr 03, 2019 11:42 pm Without a formal truth predicate we lack a definite criterion measure of correctness, and
without correctness there can be no self-validation, so the system would not be able to
measure its own incremental improvements.
Precisely the problem of the epistemic criterion. You need to define "correct" AND "incorrect", i.e. "right" and "wrong", and that is the domain of moral philosophy, not logic.

Positive and negative validation. Without ways to detect success and failure you can't build positive or negative feedback loops.

This ventures into systems engineering - far beyond logic.
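The point about feedback loops can be illustrated with a toy search: progress toward a hidden target is only possible because every guess gets classified as a success or a failure. The target and bounds below are arbitrary illustration values:

```python
# Sketch of why success/failure detection enables a feedback loop:
# a guesser narrows in on a hidden target only because each guess is
# validated as too low, too high, or correct.

def feedback(guess, target):
    if guess < target:
        return "too_low"       # negative validation
    if guess > target:
        return "too_high"      # negative validation
    return "correct"           # positive validation

target, low, high = 37, 0, 100
steps = 0
while True:
    guess = (low + high) // 2  # bisect the remaining interval
    steps += 1
    signal = feedback(guess, target)
    if signal == "correct":
        break
    elif signal == "too_low":
        low = guess + 1
    else:
        high = guess - 1

print(guess, steps)            # converges in a handful of steps
```

Remove the `feedback` function and the loop has no way to update `low` and `high`: without a success/failure signal there is nothing to feed back.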
PeteOlcott
Posts: 1597
Joined: Mon Jul 25, 2016 6:55 pm

Re: The logical error of the Liar Paradox

Post by PeteOlcott »

Logik wrote: Wed Apr 03, 2019 11:50 pm
PeteOlcott wrote: Wed Apr 03, 2019 11:42 pm http://www.cyc.com/documentation/ontologists-handbook/
My key goal is to figure out how to auto populate a knowledge ontology like Cyc.
You are building an expert system. https://en.wikipedia.org/wiki/Expert_system
PeteOlcott wrote: Wed Apr 03, 2019 11:42 pm One of the key aspects of doing this is a criterion measure of incremental improvement;
with such a measure, deep learning might become effective.
Deep learning still requires a training data-set, i.e. something that has been pre-classified, usually by a human.
PeteOlcott wrote: Wed Apr 03, 2019 11:42 pm Without a formal truth predicate we lack a definite criterion measure of correctness, and
without correctness there can be no self-validation, so the system would not be able to
measure its own incremental improvements.
Precisely the problem of the epistemic criterion. You need to define "correct" AND "incorrect", i.e. "right" and "wrong", and that is the domain of moral philosophy, not logic.

Positive and negative validation. Without ways to detect success and failure you can't build positive or negative feedback loops.

This ventures into systems engineering - far beyond logic.
Not an expert system; expert systems are brittle. The Cyc project has over 700 labor-years invested
in a single general-purpose knowledge ontology.

Correct and incorrect as in arithmetic.
Is this sentence:
(1) logically true,
(2) logically false, or
(3) otherwise logically not true?

I have proved that this sentence is not true: G ↔ ~(F ⊢ G)
It may also be not false; for it to be false, its negation
must be true: G ↔ (F ⊢ G)
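The three-way classification above can be sketched for simple self-referential sentences, treating a sentence S with truth condition S ↔ f(S) as a truth bearer only when exactly one truth value is stable under that condition. The encoding is a toy illustration, not Olcott's or Priest's formal system:

```python
# Three-valued classification: a sentence is "true", "false", or
# "untrue" (not a truth bearer). A self-referential sentence
# S <-> f(S) is a truth bearer only if exactly one truth value
# is a stable fixed point of its truth condition f.

def classify(truth_condition):
    """Return 'true', 'false', or 'untrue' for S <-> truth_condition(S)."""
    stable = [v for v in (True, False) if truth_condition(v) == v]
    if stable == [True]:
        return "true"
    if stable == [False]:
        return "false"
    return "untrue"  # no stable value (Liar) or both (Truth Teller)

liar = lambda s: not s   # "This sentence is false"
teller = lambda s: s     # "This sentence is true"
plain = lambda s: True   # truth condition holds regardless of s

print(classify(liar))    # untrue: no consistent assignment exists
print(classify(teller))  # untrue: both assignments fit, none determined
print(classify(plain))   # true
```

On this toy account the Liar lands in case (3) because no assignment is consistent, and the Truth Teller lands there for the opposite reason: both assignments are consistent, so neither is determined.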

That seems to be the thing that conventional wisdom let slip
through the cracks:

THE TRUTH TELLER PARADOX
Chris Mortensen and Graham Priest
http://virthost.vub.ac.be/lnaweb/ojs/in ... ew/939/774