Londoner wrote: "Of course if we assume various axioms then things will follow from those axioms. But we can do that for anything."
In fact, that's all we
ever do.
In maths and logic, we at least have a closed, self-referential, internally-coherent system of symbols with which to work. But not so in real life. In real life, all knowledge is merely empirical, and hence is only
probabilistic. And that's true of everything from the most raw superstition to the most rigorous science...it's all just a question of probabilities, and these probabilities are only known once an original axiom has already been taken for granted.
To illustrate: the scientist working in his lab can never perform the complete set of possible experiments for any one phenomenon. That would be infinite. So he performs only a set of, say, 10...but then he takes for granted that the next 100 or 1,000 or 1,000,000 tests would show the same results, and so says, "I have proved..." But he has not. Not if by "proved" we understand "shown to be beyond all possibility of error." Rather, he has merely arrived at a very high probability estimate, one
very, very likely to turn out to be consistent, but not
certain to do so.
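This point, that finitely many confirming trials raise the probability of the next confirmation without ever reaching certainty, can be illustrated with Laplace's rule of succession. (The rule and the function below are my illustration, not something the post itself invokes: starting from a uniform prior, after n successes in n trials the probability that the next trial also succeeds is (n+1)/(n+2), which approaches 1 as n grows but never equals it.)

```python
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Laplace's rule of succession: the posterior probability that the
    next trial succeeds, given `successes` out of `trials` so far,
    under a uniform prior over the unknown success rate."""
    return Fraction(successes + 1, trials + 2)

# Every experiment so far has confirmed the hypothesis, yet the
# probability of the next confirmation stays strictly below 1.
for n in (10, 100, 1_000, 1_000_000):
    print(n, float(rule_of_succession(n, n)))
```

So even the scientist's 1,000,000th unbroken confirmation leaves him at 1000001/1000002, not at proof in the strict sense.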
Moreover -- and here is the key point -- from where does he get the knowledge that science will "work" for him, or the knowledge that 10 tests will be confirmed by the next 10? He does not really have proof of either: rather, he takes them
as foundational axioms, presuming such things as that reality operates according to laws, that a controlled experiment can be reproduced, or that lab experiments will transfer to the larger world. And we consider him quite rational to do so.
The upshot, then, is this: it makes no sense to indict
symbolic logic for being "unreliable," since it is quite as predictable as maths; and it makes no sense to indict
propositional logic for failing to be
absolutely certain. For both have met the standard of trustworthiness that an accurate view of them warrants --
rational consistency for the first, and
probabilistic certainty for the second. They've done what they should, in other words; and to demand more of them does not make them unreliable
per se -- it simply indicates that the objector has an unrealistic view of what both do.