Logik wrote: ↑Wed Apr 03, 2019 11:50 pm
You are building an expert system.
https://en.wikipedia.org/wiki/Expert_system
PeteOlcott wrote: ↑Wed Apr 03, 2019 11:42 pm
One of the key aspects of doing this is a criterion measure of incremental improvement; with that in place, deep learning might become effective.
Deep learning still requires a training data-set, i.e. something that has been pre-classified, usually by a human.
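The dependence on pre-classified data can be sketched with a toy supervised classifier; the data and the "small"/"large" labels below are hypothetical, invented purely for illustration:

```python
# A minimal sketch of why supervised learning (deep learning included)
# needs a pre-classified training set: without the human-assigned labels
# in `labeled_data`, the classifier has nothing to learn from.

def nearest_neighbor(train, query):
    """Classify `query` with the label of the closest training example."""
    return min(train, key=lambda ex: abs(ex[0] - query))[1]

# Every example must already carry a label ("small"/"large"):
labeled_data = [(1.0, "small"), (2.0, "small"), (9.0, "large"), (10.0, "large")]

print(nearest_neighbor(labeled_data, 1.5))   # -> small
print(nearest_neighbor(labeled_data, 8.0))   # -> large
```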
PeteOlcott wrote: ↑Wed Apr 03, 2019 11:42 pm
Without a formal truth predicate we lack a definite criterion measure of correctness, and without that there can be no self-validation, so the system would not be able to measure its own incremental improvements.
Precisely the problem of the epistemic criterion. You need to define "correct" AND "incorrect", i.e. "right" and "wrong", and that is the domain of moral philosophy, not logic.
Positive and negative validation: without ways to detect success and failure, you can't build positive or negative feedback loops.
This ventures into systems engineering - far beyond logic.
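The engineering point can be sketched as a toy self-improvement loop: the loop only works because a success signal exists. The `score` function below stands in for the "criterion measure of correct"; the target value and the whole setup are hypothetical:

```python
import random

# A sketch of a positive feedback loop. Improvement is only possible
# because `score` gives a definite success/failure signal; remove it
# and the loop has no way to tell better from worse.

def score(candidate, target=42):
    """Validation signal: closer to the (hypothetical) target is better."""
    return -abs(candidate - target)

def improve(candidate, steps=1000, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        trial = candidate + rng.choice([-1, 1])
        if score(trial) > score(candidate):   # feedback: keep only improvements
            candidate = trial
    return candidate

print(improve(0))  # hill-climbs toward the target
```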
Not an expert system; they are brittle. The Cyc project has over 700 labor-years invested
in a single general-purpose knowledge ontology.
Correct and incorrect as in arithmetic.
Is this sentence:
(1) Logically true,
(2) Logically false, or
(3) Otherwise logically not true?
I have proved that this sentence is not true: G ↔ ~(F ⊢ G)
It may also be not false: for it to be false, its negation
must be true: G ↔ (F ⊢ G)
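The contrast between a sentence that is neither true nor false and one whose value is underdetermined can be sketched as a search for consistent truth-value assignments. This two-valued toy is my own informal illustration, not a formal treatment:

```python
# For each self-referential sentence, find the truth values v for which
# assigning v to the sentence is consistent (sentence(v) == v).
# Liar ("this sentence is not true"):  no consistent value at all.
# Truth teller ("this sentence is true"): both values work, so the
# value is underdetermined rather than contradictory.

def consistent_values(sentence):
    """Return the truth values v satisfying sentence(v) == v."""
    return [v for v in (True, False) if sentence(v) == v]

liar         = lambda v: not v   # says of itself that it is not true
truth_teller = lambda v: v       # says of itself that it is true

print(consistent_values(liar))          # []            -> neither true nor false
print(consistent_values(truth_teller))  # [True, False] -> underdetermined
```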
That seems to be the thing that conventional wisdom let slip
through the cracks:
The Truth Teller Paradox
Chris Mortensen and Graham Priest
http://virthost.vub.ac.be/lnaweb/ojs/in ... ew/939/774