The frame problem describes an issue with using first-order logic (FOL) to express facts about a robot in the world. Representing the state of a robot with traditional FOL requires many axioms (in symbolic form) just to state that aspects of the environment do not change arbitrarily.
Nouvelle AI seeks to sidestep the frame problem by dispensing with filling the AI or robot with volumes of symbolic language, and instead letting more complex behaviors emerge from combining simpler behavioral elements.
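A toy sketch of the axiom blow-up the frame problem refers to (the fluent and action names are made up for illustration; this is not situation-calculus syntax proper): with n fluents and m actions, a naive FOL encoding needs on the order of n×m "nothing else changed" axioms.

```python
# Hypothetical world state: three fluents, three actions.
fluents = ["holding_block", "door_open", "light_on"]
actions = ["pick_up", "open_door", "toggle_light"]

# Effect axioms: the one fluent each action actually changes.
effects = {"pick_up": "holding_block",
           "open_door": "door_open",
           "toggle_light": "light_on"}

def frame_axioms():
    """Generate a 'fluent f is unchanged by action a' axiom for every
    action/fluent pair the action does NOT affect."""
    axioms = []
    for a in actions:
        for f in fluents:
            if effects[a] != f:
                axioms.append(f"{f}(do({a}, s)) <-> {f}(s)")
    return axioms

print(len(frame_axioms()))  # 6 frame axioms for only 3 fluents x 3 actions
```

Even this tiny world needs six frame axioms on top of its three effect axioms; the count grows multiplicatively as fluents and actions are added, which is what makes the naive encoding impractical.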
One of the key aspects of doing this is a criterion measure of incremental improvement;
with such a measure, deep learning might become effective.
Without a formal truth predicate we lack a definite criterion measure of correctness, and
without correctness there can be no self-validation, so the system would not be able to
measure its own incremental improvements.
PeteOlcott wrote: ↑Wed Apr 03, 2019 11:42 pm
One of the key aspects of doing this is a criterion measure of incremental improvement;
with such a measure, deep learning might become effective.
Deep learning still requires a training data set, i.e. something that has been pre-classified, usually by a human.
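A minimal sketch of that dependence, using a toy perceptron on made-up, human-labeled data (all values hypothetical, no libraries): remove the labels and the error signal disappears, and with it the learning.

```python
# Pre-classified data: (feature vector, label) pairs supplied by a human.
labeled = [((1.0, 0.0), 1), ((0.0, 1.0), 0),
           ((1.0, 1.0), 1), ((0.0, 0.0), 0)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias

for _ in range(20):                       # training epochs
    for x, y in labeled:
        pred = 1 if w[0]*x[0] + w[1]*x[1] + b > 0 else 0
        err = y - pred                    # error exists only because y is given
        w = [w[0] + err * x[0], w[1] + err * x[1]]
        b += err

# Without the labels y there is no err, hence no weight update, hence no learning.
```

The point is structural rather than about this particular model: the update rule is driven entirely by the human-provided label y, which is exactly the pre-classification the post refers to.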
PeteOlcott wrote: ↑Wed Apr 03, 2019 11:42 pm
Without a formal truth predicate we lack a definite criterion measure of correctness, and
without correctness there can be no self-validation, so the system would not be able to
measure its own incremental improvements.
Precisely the problem of an epistemic criterion. You need to define "correct" AND "incorrect", i.e. "right" and "wrong", and that's the domain of moral philosophy, not logic.
Positive and negative validation. Without ways to detect success and failure you can't build positive or negative feedback loops.
This ventures into systems engineering - far beyond logic.
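A toy illustration of the point (hypothetical numbers, plain hill climbing rather than deep learning): the loop below only improves because a score() function supplies the positive/negative validation signal; delete it and "improvement" is undefined.

```python
import random
random.seed(0)          # deterministic for the sake of the example

target = 42.0           # made-up goal the system should approach

def score(x):
    """The criterion measure: smaller is better. Without this, the loop
    has no way to tell improvement from regression."""
    return abs(x - target)

x = 0.0
for _ in range(1000):
    candidate = x + random.uniform(-1, 1)
    if score(candidate) < score(x):   # positive validation: keep it
        x = candidate                 # (failing candidates are discarded)

# x drifts toward 42 only because score() detects success and failure.
```

This is the feedback-loop structure in miniature: success detection (score decreases) closes the loop, and everything about choosing and trusting score() lies outside the logic of the update rule itself.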
Not an expert system; those are brittle. The Cyc project has over 700 labor years invested
in a single general-purpose knowledge ontology.
Correct and incorrect as in arithmetic.
Is this sentence:
(1) logically true,
(2) logically false, or
(3) otherwise not logically true?
I have proved this sentence is not true: G ↔ ~(F ⊢ G)
It may also be not false; for it to be false, its negation
must be true: G ↔ (F ⊢ G)
That seems to be the thing that conventional wisdom let slip
through the cracks: