Totality is 'inconsistent' consistently in that it is not 'closed'.

Skepdick wrote: ↑Sun Jan 17, 2021 7:27 pm
Higher up in your post you pointed out that "=" can (and does) mean multiple things. So you are recognising that one symbol can mean two things, which is the definition of equivocation. Or the definition of "polymorphism", depending on which camp of understanding you are in.

Scott Mayers wrote: ↑Sun Jan 17, 2021 7:09 pm (blah blah blah)
Either way, the Law of Identity is not about the symbols specifically but about agreement on the MEANING of the symbols you choose.
(blah blah blah)
But you've pointed out the main reason why every logical system blows up. The tokenizer is the thing which assigns meaning to symbols, and in particular - the tokenizer assigns meaning to operators.
And if I simply come up with some new operation/operator (that is not representable/expressible in your current notation) ... well. Unexpected things happen.
Example: The LNC is often codified as ¬P ∧ P ⇔ False, but does ¬P ∧ P ⇔ False imply or necessitate P ∧ ¬P ⇔ False ? Could there be a system such that P ∧ ¬P ⇔ True but ¬P ∧ P ⇔ False ? Of course! Because nowhere in the LNC does it specify that the "∧" operator must commute.
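To make that concrete, here is a toy sketch in Python with entirely made-up rules: a "conjunction" that inspects the syntactic form of its operands rather than just their truth values, so that ¬P ∧ P comes out False while P ∧ ¬P comes out True. Nothing about this construction is standard; it only illustrates that the LNC as usually stated does not force "∧" to commute.

```python
# Formulas are tiny syntax objects, not bare truth values, so the
# "conjunction" below can see which side is a negation.

class Atom:
    def __init__(self, value):
        self.value = value
    def eval(self):
        return self.value

class Not:
    def __init__(self, f):
        self.f = f
    def eval(self):
        return not self.f.eval()

def conj(left, right):
    # Hypothetical, order-sensitive rules:
    if isinstance(left, Not):
        return False           # ¬X ∧ Y  ⇔ False  (LNC-shaped)
    if isinstance(right, Not):
        return True            # X ∧ ¬Y  ⇔ True   (its mirror is NOT forced)
    return left.eval() and right.eval()   # otherwise, ordinary conjunction

for v in (True, False):
    P = Atom(v)
    print(v, conj(Not(P), P), conj(P, Not(P)))
```

For both truth values of P, `conj(Not(P), P)` is False while `conj(P, Not(P))` is True, which is exactly the asymmetry the LNC leaves open.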
And I should hope to learn what it means for a mechanically realizable and realised logical system to be "inconsistent".
It's defined as "inconsistent" even though it's consistent with reality. So now what?
As for symbol assignment, the laws of (all) logics are only about the meaning, not the particular symbols they are FORCED to use when attempting to express anything.
Note that "inconsistency" here means "inconsistency AND consistency" holding simultaneously of the same universe of discourse. A 'logic' addressing such would be induction, for example, which science applies. Our conscious existence is another example, given we are a machine that is not 'complete': contradictions are what MOTIVATE or redirect one towards some other direction. A 'closed' system simply rejects the contradiction. But an 'open' system (which some define as 'informal logic') can be PARTIALLY closed with respect to a subset of its rules while permitting non-closure, such as a command to act might represent when we use real human languages. For instance, normal propositional logic deals with closed sentences, whereas the grammar of a human language EXTENDS to open-ended communications: "Hey you, come here," can still represent something meaningful. The logic of such a command is open in that the SATISFACTION of the input statement is only 'closed' when you accept both possibilities: the person one is asking either comes or doesn't come.
The difference relates to 'functions' versus 'relations'. We treat formal logics as 'functions' (one unique expected outcome per set of inputs) versus, for example, a 'relation' that permits multiple outputs (any number more than one). You CAN define a partial or open system within a closed system, but then you would have to discretely separate the outputs such that the whole is a set of functions instead.

We define a circle 'functionally' in math as two functions combined: the top half and the bottom half, when we use the Cartesian plane to define it. The whole circle is no longer a 'function' yet is still CLOSED in meaning, by allowing for the inclusion of 'relations' and not just 'functions'. As such, this shows the relative use of keeping those three universal laws to describe any system of reasoning, consistent or inconsistent. Notice, oddly, that 'functions', though 'complete' in form, are 'incomplete' in that they are just one KIND of 'relation(ship)'. 'Relations', by contrast, are normally treated as 'unsatisfactory' in that they have more than one output and can represent wholes that are MULTIVARIABLE in essence. Math in these cases utilizes both consistency and inconsistency by merely expressing things through perspective consistencies that get combined.
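As a concrete sketch of that circle example (the function names here are my own), the unit circle x² + y² = 1 is a relation, since most x values in (-1, 1) map to TWO y values, so it gets split into two functions:

```python
import math

def top_half(x):
    # Upper semicircle: exactly one output per x -- a 'function'.
    return math.sqrt(1 - x * x)

def bottom_half(x):
    # Lower semicircle: exactly one output per x -- a 'function'.
    return -math.sqrt(1 - x * x)

def circle_relation(x):
    # The whole circle as a 'relation': the SET of outputs for a given x.
    y = math.sqrt(1 - x * x)
    return {y, -y}
```

Each half satisfies the "one output per input" requirement; only their combination recovers the full circle, and that combination is no longer a function.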
I'm not sure I'm helping given your different background. But the laws of logic are sufficient to describe any system. The confusion I think you and the OP may be having is to mistake how different languages opt to use different symbols. That is confusing 'spelling' conventions (or syntax) with comprehension of the underlying logic, which is universal. You can express the same MEANING of something using different languages (where the languages are equally complete or incomplete). The meaning is what the logic refers to, though it is forced to use some arbitrary convention of symbols. That arbitrariness of symbols is NOT what the laws of logic refer to. Those laws are 'semantic' laws, not 'syntax'.
Edit: Note that your given, (P=P)=(-P=-P), using the same sign, HAPPENS coincidentally to also be true, by the way. But this is because the MAIN comparative (=) comes out true whether it means "evaluatively true" OR "logically true". Normally, logical expressions of the laws don't mix the two, and so would always use different symbols for the middle term, as I illustrated. When expressing the law with only ONE comparative symbol, there is no confusion: "P=P", if stated for Identity, means the '=' is understood to be "logical equivalence".
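A quick check of that claim on the evaluative reading, where every "=" is taken as truth-value equality (which is what Python's `==` does for booleans):

```python
# Evaluative reading of (P = P) = (-P = -P): each "=" is truth-value
# equality. The identity holds for both truth values of P.
for P in (True, False):
    left = (P == P)                # P = P
    right = ((not P) == (not P))   # -P = -P
    assert left == right           # the MAIN comparative
```

For booleans the evaluative and logical readings coincide here, which is exactly why the given "happens to" come out true under either interpretation of the middle "=".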