Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 5:23 am
by Gary Childress
godelian wrote: ↑Mon Jan 13, 2025 4:45 am
Gary Childress wrote: ↑Mon Jan 13, 2025 4:19 am
As an "expert beginner" I guess you have lost me. I'm not really following your argument. It's possibly far beyond my level of comprehension. But I'm not so sure that matters or is of importance. If all is determined and there is a God, then I suppose God prefers that I not understand your argument.
An idealized IslamGPT is the truth of Islam. There is no idealized atheismGPT possible. Therefore, atheism has no truth.
Can you explain how the conclusion follows necessarily from Premises 1 and 2? Otherwise, I'm not seeing how the argument is sound. It may have true premises, but I don't recognize the form of the argument as a valid one.
So far I'm taking your argument as being this:
P1: An idealized IslamGPT is the truth of Islam
P2: There is no idealized atheismGPT possible
C: Therefore, atheism has no truth
Also, what does GPT stand for? General Principle of Truth?
Thanks.
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 5:55 am
by godelian
Gary Childress wrote: ↑Mon Jan 13, 2025 5:23 am
Also, what does GPT stand for? General Principle of Truth?
Who would know better than the most popular "GPT" itself?
ChatGPT: What does "GPT" stand for in ChatGPT?
GPT stands for Generative Pre-trained Transformer in ChatGPT. Here's what each part means:
Generative: Refers to the model's ability to generate text, meaning it creates responses based on the input it receives.
Pre-trained: Indicates that the model is trained on a large dataset before being fine-tuned for specific tasks. This pre-training allows it to understand and generate natural language effectively.
Transformer: Refers to the underlying architecture of the model, which is based on the Transformer neural network. This architecture excels in handling sequential data like text by focusing on contextual relationships between words or tokens.
Together, GPT forms the backbone of ChatGPT, enabling it to engage in conversational interactions.
A "GPT" is a particular type of program which always terminates with an output. Given the Curry-Howard Correspondence, every terminating program is a proof. In fact, a program does not even need to be a "GPT" for it to be a proof. It just needs to terminate with an output. That is enough for the program to prove "something". The program must terminate, however:
https://en.wikipedia.org/wiki/Curry%E2% ... espondence
Because of the possibility of writing non-terminating programs, Turing-complete models of computation (such as languages with arbitrary recursive functions) must be interpreted with care, as naive application of the correspondence leads to an inconsistent logic. The best way of dealing with arbitrary computation from a logical point of view is still an actively debated research question, but one popular approach is based on using monads to segregate provably terminating from potentially non-terminating code (an approach that also generalizes to much richer models of computation,[9] and is itself related to modal logic by a natural extension of the Curry–Howard isomorphism[ext 1]). A more radical approach, advocated by total functional programming, is to eliminate unrestricted recursion (and forgo Turing completeness, although still retaining high computational complexity), using more controlled corecursion wherever non-terminating behavior is actually desired.
There is also the issue that programs need to be "well-typed". In my opinion, this is a much lesser concern because every digital input and every digital output ultimately always corresponds to a natural number. Therefore, every program that takes an input and produces an output is always an endomorphic mapping on the natural numbers, and therefore, automatically well-typed, albeit while potentially proving a rather meaningless proposition.
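The encoding claim above can be sketched concretely (an illustrative sketch, not part of the original post; the helper names `bytes_to_nat`, `nat_to_bytes`, and `as_nat_endomap` are invented for this example): any terminating program on digital data can be viewed as a map on the natural numbers, because every byte string encodes a natural number and vice versa.

```python
# Illustrative sketch: a terminating bytes -> bytes program lifted to a map
# on the natural numbers, since byte strings and naturals are interconvertible.

def bytes_to_nat(data: bytes) -> int:
    """Encode a byte string as a natural number (the 0x01 prefix preserves leading zero bytes)."""
    return int.from_bytes(b"\x01" + data, "big")

def nat_to_bytes(n: int) -> bytes:
    """Invert bytes_to_nat."""
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:]  # drop the 0x01 prefix

def as_nat_endomap(program):
    """Lift a bytes -> bytes program to a map from naturals to naturals."""
    return lambda n: bytes_to_nat(program(nat_to_bytes(n)))

# Example: a trivial terminating 'program' that upper-cases its input,
# viewed as an endomorphic mapping on the natural numbers.
shout = as_nat_endomap(lambda b: b.upper())
```

This only illustrates the encoding step; whether such a map is "automatically well-typed" in the sense Curry-Howard requires is exactly what is being debated in the thread.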
ChatGPT: Is every program a proof?
The idea that "every program is a proof" stems from a principle in computer science called the Curry-Howard correspondence (or propositions-as-types paradigm). This principle draws a deep analogy between logic and computation, particularly in the context of typed programming languages and constructive logic. Here's what it means:
1. Curry-Howard Correspondence
In this framework, types in a programming language correspond to logical propositions.
A program (or a function) corresponds to a proof of the proposition that its type represents.
If a program is well-typed (it compiles without type errors), it is said to "prove" the correctness of the logical proposition represented by its type.
2. Key Implications
Well-Typed Programs: If a program has a valid type (e.g., a function f: A -> B), it demonstrates a constructive proof that given any value of type A, it can produce a value of type B.
Constructive Proofs: Unlike classical logic, constructive logic requires proofs to be constructive—that is, they must provide a way to explicitly construct the result. This aligns naturally with programs, which are executable constructions.
3. Does Every Program Fit?
In principle, yes, every program in a strongly typed language (like Haskell, Coq, or Agda) corresponds to a proof. This is because the language's type system enforces constraints that align with logical rules.
However, this correspondence doesn't strictly apply to programs in languages without a strong, expressive type system (e.g., C or Python). In such languages, the "proofs" are less formal or absent entirely because the type systems are either weak or dynamic.
4. Are All Proofs Programs?
The reverse is also true: every proof in constructive logic can be viewed as a program. For instance, formal proof systems like Coq and Lean generate executable programs from proofs.
Practical Considerations
The correspondence is foundational in fields like formal verification, where programs are mathematically proven to be correct using tools like Coq or Isabelle.
Not all programmers or programs engage with this proof-level rigor, especially in practical software development, where trade-offs (e.g., performance vs. formal correctness) are common.
So, in the realm of formal logic and type theory, every program is indeed a proof—but this applies most directly in systems designed to enforce the correspondence.
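The propositions-as-types reading described above can be illustrated with a small sketch (my example, not ChatGPT's; note that Python type hints merely annotate and are not enforced, whereas a proof assistant such as Coq or Agda would actually check them):

```python
# Illustrative sketch of propositions-as-types: each function's type is a
# proposition, and the function body is a constructive proof of it.
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Proposition: (A and B) implies A. The program proves it by constructing
# a value of type A from any value of type (A, B).
def proj_left(pair: Tuple[A, B]) -> A:
    return pair[0]

# Proposition: ((A -> B) and A) implies B. Modus ponens is just
# function application.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)
```

Both functions terminate on every input, which is the side condition godelian stresses: only terminating programs count as proofs under the correspondence.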
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 6:39 am
by Gary Childress
godelian wrote: ↑Mon Jan 13, 2025 5:55 am
Gary Childress wrote: ↑Mon Jan 13, 2025 5:23 am
Also, what does GPT stand for? General Principle of Truth?
Who would know better than the most popular "GPT" itself?
ChatGPT: What does "GPT" stand for in ChatGPT?
GPT stands for Generative Pre-trained Transformer in ChatGPT. Here's what each part means:
Generative: Refers to the model's ability to generate text, meaning it creates responses based on the input it receives.
Pre-trained: Indicates that the model is trained on a large dataset before being fine-tuned for specific tasks. This pre-training allows it to understand and generate natural language effectively.
Transformer: Refers to the underlying architecture of the model, which is based on the Transformer neural network. This architecture excels in handling sequential data like text by focusing on contextual relationships between words or tokens.
Together, GPT forms the backbone of ChatGPT, enabling it to engage in conversational interactions.
A "GPT" is a particular type of program which always terminates with an output. Given the Curry-Howard Correspondence, every terminating program is a proof. In fact, a program does not even need to be a "GPT" for it to be a proof. It just needs to terminate with an output. That is enough for the program to prove "something". The program must terminate, however:
https://en.wikipedia.org/wiki/Curry%E2% ... espondence
Because of the possibility of writing non-terminating programs, Turing-complete models of computation (such as languages with arbitrary recursive functions) must be interpreted with care, as naive application of the correspondence leads to an inconsistent logic. The best way of dealing with arbitrary computation from a logical point of view is still an actively debated research question, but one popular approach is based on using monads to segregate provably terminating from potentially non-terminating code (an approach that also generalizes to much richer models of computation,[9] and is itself related to modal logic by a natural extension of the Curry–Howard isomorphism[ext 1]). A more radical approach, advocated by total functional programming, is to eliminate unrestricted recursion (and forgo Turing completeness, although still retaining high computational complexity), using more controlled corecursion wherever non-terminating behavior is actually desired.
There is also the issue that programs need to be "well-typed". In my opinion, this is a much lesser concern because every digital input and every digital output ultimately always corresponds to a natural number. Therefore, every program that takes an input and produces an output is always an endomorphic mapping on the natural numbers, and therefore, automatically well-typed, albeit while potentially proving a rather meaningless proposition.
ChatGPT: Is every program a proof?
The idea that "every program is a proof" stems from a principle in computer science called the Curry-Howard correspondence (or propositions-as-types paradigm). This principle draws a deep analogy between logic and computation, particularly in the context of typed programming languages and constructive logic. Here's what it means:
1. Curry-Howard Correspondence
In this framework, types in a programming language correspond to logical propositions.
A program (or a function) corresponds to a proof of the proposition that its type represents.
If a program is well-typed (it compiles without type errors), it is said to "prove" the correctness of the logical proposition represented by its type.
2. Key Implications
Well-Typed Programs: If a program has a valid type (e.g., a function f: A -> B), it demonstrates a constructive proof that given any value of type A, it can produce a value of type B.
Constructive Proofs: Unlike classical logic, constructive logic requires proofs to be constructive—that is, they must provide a way to explicitly construct the result. This aligns naturally with programs, which are executable constructions.
3. Does Every Program Fit?
In principle, yes, every program in a strongly typed language (like Haskell, Coq, or Agda) corresponds to a proof. This is because the language's type system enforces constraints that align with logical rules.
However, this correspondence doesn't strictly apply to programs in languages without a strong, expressive type system (e.g., C or Python). In such languages, the "proofs" are less formal or absent entirely because the type systems are either weak or dynamic.
4. Are All Proofs Programs?
The reverse is also true: every proof in constructive logic can be viewed as a program. For instance, formal proof systems like Coq and Lean generate executable programs from proofs.
Practical Considerations
The correspondence is foundational in fields like formal verification, where programs are mathematically proven to be correct using tools like Coq or Isabelle.
Not all programmers or programs engage with this proof-level rigor, especially in practical software development, where trade-offs (e.g., performance vs. formal correctness) are common.
So, in the realm of formal logic and type theory, every program is indeed a proof—but this applies most directly in systems designed to enforce the correspondence.
Well, it seems that my ignorance of programming has possibly forever condemned me to not understand your argument. At first glance it appears that you are saying something along the lines of:
P1: If no idealized GPT is possible for an instance of something of type T, then that instance of something of type T has no truth.
P2: Atheism = An instance of something of type T.
C1: Therefore, if no idealized GPT is possible for atheism, then atheism has no truth.
P3: No idealized GPT is possible for atheism.
C: Therefore, atheism has no truth.
My logic was introductory level and is beyond rusty but is that more or less your argument?
If that is your argument, then I would be interested in having P1, 2 and 3 fleshed out more so that it is more self-evident to me what they are saying. As things stand now, I'm a bit overwhelmed by the influx of new terms and ideas.
If that is not your argument, then would you correct me accordingly?
Thank you for your time and sorry to impose but your argument does look enticing to understand.
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 6:42 am
by godelian
Gary Childress wrote: ↑Mon Jan 13, 2025 6:39 am
godelian wrote: ↑Mon Jan 13, 2025 5:55 am
Gary Childress wrote: ↑Mon Jan 13, 2025 5:23 am
Also, what does GPT stand for? General Principle of Truth?
Who would know better than the most popular "GPT" itself?
ChatGPT: What does "GPT" stand for in ChatGPT?
GPT stands for Generative Pre-trained Transformer in ChatGPT. Here's what each part means:
Generative: Refers to the model's ability to generate text, meaning it creates responses based on the input it receives.
Pre-trained: Indicates that the model is trained on a large dataset before being fine-tuned for specific tasks. This pre-training allows it to understand and generate natural language effectively.
Transformer: Refers to the underlying architecture of the model, which is based on the Transformer neural network. This architecture excels in handling sequential data like text by focusing on contextual relationships between words or tokens.
Together, GPT forms the backbone of ChatGPT, enabling it to engage in conversational interactions.
A "GPT" is a particular type of program which always terminates with an output. Given the Curry-Howard Correspondence, every terminating program is a proof. In fact, a program does not even need to be a "GPT" for it to be a proof. It just needs to terminate with an output. That is enough for the program to prove "something". The program must terminate, however:
https://en.wikipedia.org/wiki/Curry%E2% ... espondence
Because of the possibility of writing non-terminating programs, Turing-complete models of computation (such as languages with arbitrary recursive functions) must be interpreted with care, as naive application of the correspondence leads to an inconsistent logic. The best way of dealing with arbitrary computation from a logical point of view is still an actively debated research question, but one popular approach is based on using monads to segregate provably terminating from potentially non-terminating code (an approach that also generalizes to much richer models of computation,[9] and is itself related to modal logic by a natural extension of the Curry–Howard isomorphism[ext 1]). A more radical approach, advocated by total functional programming, is to eliminate unrestricted recursion (and forgo Turing completeness, although still retaining high computational complexity), using more controlled corecursion wherever non-terminating behavior is actually desired.
There is also the issue that programs need to be "well-typed". In my opinion, this is a much lesser concern because every digital input and every digital output ultimately always corresponds to a natural number. Therefore, every program that takes an input and produces an output is always an endomorphic mapping on the natural numbers, and therefore, automatically well-typed, albeit while potentially proving a rather meaningless proposition.
ChatGPT: Is every program a proof?
The idea that "every program is a proof" stems from a principle in computer science called the Curry-Howard correspondence (or propositions-as-types paradigm). This principle draws a deep analogy between logic and computation, particularly in the context of typed programming languages and constructive logic. Here's what it means:
1. Curry-Howard Correspondence
In this framework, types in a programming language correspond to logical propositions.
A program (or a function) corresponds to a proof of the proposition that its type represents.
If a program is well-typed (it compiles without type errors), it is said to "prove" the correctness of the logical proposition represented by its type.
2. Key Implications
Well-Typed Programs: If a program has a valid type (e.g., a function f: A -> B), it demonstrates a constructive proof that given any value of type A, it can produce a value of type B.
Constructive Proofs: Unlike classical logic, constructive logic requires proofs to be constructive—that is, they must provide a way to explicitly construct the result. This aligns naturally with programs, which are executable constructions.
3. Does Every Program Fit?
In principle, yes, every program in a strongly typed language (like Haskell, Coq, or Agda) corresponds to a proof. This is because the language's type system enforces constraints that align with logical rules.
However, this correspondence doesn't strictly apply to programs in languages without a strong, expressive type system (e.g., C or Python). In such languages, the "proofs" are less formal or absent entirely because the type systems are either weak or dynamic.
4. Are All Proofs Programs?
The reverse is also true: every proof in constructive logic can be viewed as a program. For instance, formal proof systems like Coq and Lean generate executable programs from proofs.
Practical Considerations
The correspondence is foundational in fields like formal verification, where programs are mathematically proven to be correct using tools like Coq or Isabelle.
Not all programmers or programs engage with this proof-level rigor, especially in practical software development, where trade-offs (e.g., performance vs. formal correctness) are common.
So, in the realm of formal logic and type theory, every program is indeed a proof—but this applies most directly in systems designed to enforce the correspondence.
Well, it seems that my ignorance of programming has possibly forever condemned me to not understand your argument. At first glance it appears that you are saying something along the lines of:
P1: If no idealized GPT is possible for an instance of something of type T, then that instance of something of type T has no truth.
P2: Atheism = An instance of something of type T.
C1: Therefore, if no idealized GPT is possible for atheism, then atheism has no truth.
P3: No idealized GPT is possible for atheism.
C: Therefore, atheism has no truth.
My logic was introductory level and is beyond rusty but is that more or less your argument?
If that is your argument, then I would be interested in having P1, 2 and 3 fleshed out more so that it is more self-evident to me what they are saying. As things stand now, I'm a bit overwhelmed by the influx of new terms and ideas.
If that is not your argument, then would you correct me accordingly?
Thank you for your time and sorry to impose but your argument does look enticing to understand.
Yes, that is basically it.
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 6:45 am
by Gary Childress
godelian wrote: ↑Mon Jan 13, 2025 6:42 am
Gary Childress wrote: ↑Mon Jan 13, 2025 6:39 am
godelian wrote: ↑Mon Jan 13, 2025 5:55 am
Who would know better than the most popular "GPT" itself?
A "GPT" is a particular type of program which always terminates with an output. Given the Curry-Howard Correspondence, every terminating program is a proof. In fact, a program does not even need to be a "GPT" for it to be a proof. It just needs to terminate with an output. That is enough for the program to prove "something". The program must terminate, however:
There is also the issue that programs need to be "well-typed". In my opinion, this is a much lesser concern because every digital input and every digital output ultimately always corresponds to a natural number. Therefore, every program that takes an input and produces an output is always an endomorphic mapping on the natural numbers, and therefore, automatically well-typed, albeit while potentially proving a rather meaningless proposition.
Well, it seems that my ignorance of programming has possibly forever condemned me to not understand your argument. At first glance it appears that you are saying something along the lines of:
P1: If no idealized GPT is possible for an instance of something of type T, then that instance of something of type T has no truth.
P2: Atheism = An instance of something of type T.
C1: Therefore, if no idealized GPT is possible for atheism, then atheism has no truth.
P3: No idealized GPT is possible for atheism.
C: Therefore, atheism has no truth.
My logic was introductory level and is beyond rusty but is that more or less your argument?
If that is your argument, then I would be interested in having P1, 2 and 3 fleshed out more so that it is more self-evident to me what they are saying. As things stand now, I'm a bit overwhelmed by the influx of new terms and ideas.
If that is not your argument, then would you correct me accordingly?
Thank you for your time and sorry to impose but your argument does look enticing to understand.
Yes, that is basically it.
Hmm. In your own words, can/would you explain to me why P1 is true?
Thanks.
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 7:09 am
by godelian
Gary Childress wrote: ↑Mon Jan 13, 2025 6:45 am
P1: If no idealized GPT is possible for an instance of something of type T, then that instance of something of type T has no truth.
Hmm. In your own words, can/would you explain to me why P1 is true?
The truth about T is its model. If T has no model, then T has no truth.
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 7:22 am
by Gary Childress
godelian wrote: ↑Mon Jan 13, 2025 7:09 am
Gary Childress wrote: ↑Mon Jan 13, 2025 6:45 am
P1: If no idealized GPT is possible for an instance of something of type T, then that instance of something of type T has no truth.
Hmm. In your own words, can/would you explain to me why P1 is true?
The truth about T is its model. If T has no model, then T has no truth.
OK. Thank you very much for your time. While I was waiting for your reply, I ran the first part of the argument through ChatGPT to have it evaluate the argument. This is what ChatGPT had to say about it:
You said:
Here's the first part of the argument using premise 1:
Premise 1: If no idealized Generative Pre-trained Transformer is possible for an instance of something of type T, then that instance of something of type T has no truth.
Premise 2: Atheism is an instance of something of type T
Conclusion: Therefore, if no idealized Generative Pre-Trained Transformer is possible for atheism, then atheism has no truth.
Does that help you determine the truth of the first premise a little better?
ChatGPT said:
ChatGPT
Thank you for providing the context! This helps clarify the relationship between the premises and the conclusion. Let's break it down:
Premise 1:
"If no idealized Generative Pre-trained Transformer is possible for an instance of something of type T, then that instance of something of type T has no truth."
Now that we have the context of atheism being an instance of something of type T, the first premise seems to suggest that for something to be true, there must be an idealized GPT that can represent or process it accurately. This suggests that the concept of truth is somehow tied to the possibility of an idealized GPT being able to handle or address the instance of T.
However, whether or not an idealized GPT can exist for any given instance of something (such as atheism) doesn't necessarily determine whether that instance has truth. Truth is often considered independent of technological or computational capabilities. For example, the truth of philosophical or metaphysical claims, like those in atheism, isn't contingent on whether a machine can perfectly simulate or capture it.
Premise 2:
"Atheism is an instance of something of type T."
Atheism is being treated as an instance of a broader category of things, so in this argument, atheism is a candidate for evaluation through the lens of the first premise. The premise implies that atheism (as a worldview, belief, or position) fits into this "type T," meaning it could be analyzed with the criteria set out in the first premise.
Conclusion:
"Therefore, if no idealized Generative Pre-Trained Transformer is possible for atheism, then atheism has no truth."
The conclusion follows logically from the first premise, given that atheism is an instance of T. However, the argument's structure relies on an assumption that seems questionable: the idea that truth is linked to the possibility of creating an idealized GPT for a concept.
Evaluating the Truth of Premise 1:
Premise 1 assumes that the truth of any instance of something (in this case, atheism) is dependent on the feasibility of creating an idealized GPT. This seems like a problematic assumption, as the truth of a concept or belief system (like atheism) is generally understood to be independent of any computational or technological capabilities. Truth is typically regarded as something that exists regardless of whether a machine can model or simulate it.
Therefore, Premise 1 is likely false, or at least highly debatable, because it ties the concept of truth to the possibility of a machine (a GPT) representing that truth, which doesn't align with traditional understandings of truth in philosophy.
In summary, while the argument follows logically within the confines of the premises, the truth of the first premise itself is questionable. The first premise seems to presuppose that truth depends on the technological capability of a GPT model, which doesn't hold up in conventional philosophical discussions about truth. Would you like to explore how this argument could be revised or made clearer?
Although I don't fully grasp all the terminology, I find this argument very fascinating. What are your thoughts on ChatGPT's reply?
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 7:26 am
by accelafine
Here we go. FFS Gary. They really do see you coming a mile away.

Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 7:34 am
by Gary Childress
accelafine wrote: ↑Mon Jan 13, 2025 7:26 am
Here we go. FFS Gary. They really do see you coming a mile away.
Well, to be fair, he has made an argument and I'm interested in how well his argument holds up. For all I know, he may be a genius who has discovered what no other human being has yet discovered with regard to the question of whether there is a God or not. I'm interested in finding out more about his argument because such topics are of high interest to me.
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 7:40 am
by godelian
Gary Childress wrote: ↑Mon Jan 13, 2025 7:22 am
Although I don't fully grasp all the terminology, I find this argument very fascinating. What are your thoughts on ChatGPT's reply?
ChatGPT does not necessarily reason about the truth within the confines of Tarski's semantic theory of truth:
ChatGPT: If a theory has no model then it has no truth
This statement reflects a particular perspective in the philosophy of logic and mathematics, especially within model theory, a branch of mathematical logic. Here's a deeper analysis:
Understanding the Statement:
Theory: A set of statements or propositions, often within a formal system, that are taken to hold according to certain axioms and inference rules.
Model: A structure that satisfies the axioms of the theory, providing a concrete interpretation under which the statements of the theory are true.
Truth: Truth here is often understood as semantic truth—statements in the theory are "true" if they hold within a given model.
The statement implies:
If there is no model for a theory, the theory is inconsistent (its axioms lead to contradictions).
In such a case, no interpretation can make the statements of the theory true, meaning the theory has no semantic truth.
Logical Basis:
In classical model theory, a theory is consistent if and only if it has a model (by the completeness theorem of first-order logic, due to Gödel). If a theory has no model, it is inconsistent, and its axioms cannot all be true simultaneously.
Philosophical Implications:
Platonism: A Platonist might argue that a theory without a model fails to correspond to any objective reality and hence has no truth.
Formalism: A formalist might focus on the internal consistency of a theory rather than its semantic truth.
Constructivism: Constructivists might emphasize the existence of a model as essential for a theory to have any meaningful application or truth.
Broader Considerations:
Not all logical systems equate truth solely with the existence of a model. For example, paraconsistent logics allow some theories to tolerate contradictions without being dismissed outright.
In metaphysics or epistemology, "truth" might be defined differently, detached from formal models.
In summary, the statement aligns well with classical logic but could be debated in other logical frameworks or philosophical contexts.
I think that I mentioned at some point that this claim presupposes Tarski's semantic theory of truth. There are other truth theories possible. However, these alternative truth theories generally tend to be incompatible with mathematics.
By the way, a GPT is just a program that proves the existence of a model.
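The model-existence claim above can be illustrated at the propositional level (an invented sketch, not godelian's code): a finite propositional theory has a model exactly when some truth assignment to its atoms satisfies every sentence, so a brute-force search either exhibits a model or shows the theory is inconsistent.

```python
# Illustrative sketch: brute-force model search for a propositional theory.
# Each formula is a function from an assignment dict to a bool.
from itertools import product

def find_model(theory, atoms):
    """Return an assignment satisfying every formula in `theory`, or None."""
    for values in product([False, True], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(formula(assignment) for formula in theory):
            return assignment
    return None  # no model: the theory is (propositionally) inconsistent

# {P, P -> Q} has a model; {P, not P} has none.
consistent = find_model([lambda v: v["P"], lambda v: (not v["P"]) or v["Q"]],
                        ["P", "Q"])
inconsistent = find_model([lambda v: v["P"], lambda v: not v["P"]], ["P"])
```

This is only the finite propositional case; the completeness theorem cited in the thread extends the consistency-iff-model equivalence to first-order theories, where no such exhaustive search exists.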
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 7:40 am
by accelafine
Gary Childress wrote: ↑Mon Jan 13, 2025 7:34 am
accelafine wrote: ↑Mon Jan 13, 2025 7:26 am
Here we go. FFS Gary. They really do see you coming a mile away.
Well, to be fair, he has made an argument and I'm interested in how well his argument holds up. For all I know, he may be a genius who has discovered what no other human being has yet discovered with regard to the question of whether there is a God or not. I'm interested in finding out more about his argument because such topics are of high interest to me.
Yes. I'm sure that's really likely.

He's a creep. There is no such thing as 'atheism'. It's just a ridiculous word anyway.
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 7:56 am
by Gary Childress
godelian wrote: ↑Mon Jan 13, 2025 7:40 am
Gary Childress wrote: ↑Mon Jan 13, 2025 7:22 am
Although I don't fully grasp all the terminology, I find this argument very fascinating. What are your thoughts on ChatGPT's reply?
ChatGPT does not necessarily reason about the truth within the confines of Tarski's semantic theory of truth:
ChatGPT: If a theory has no model then it has no truth
This statement reflects a particular perspective in the philosophy of logic and mathematics, especially within model theory, a branch of mathematical logic. Here's a deeper analysis:
Understanding the Statement:
Theory: A set of statements or propositions, often within a formal system, that are taken to hold according to certain axioms and inference rules.
Model: A structure that satisfies the axioms of the theory, providing a concrete interpretation under which the statements of the theory are true.
Truth: Truth here is often understood as semantic truth—statements in the theory are "true" if they hold within a given model.
The statement implies:
If there is no model for a theory, the theory is inconsistent (its axioms lead to contradictions).
In such a case, no interpretation can make the statements of the theory true, meaning the theory has no semantic truth.
Logical Basis:
In classical model theory, a theory is consistent if and only if it has a model (by the completeness theorem of first-order logic, due to Gödel). If a theory has no model, it is inconsistent, and its axioms cannot all be true simultaneously.
Philosophical Implications:
Platonism: A Platonist might argue that a theory without a model fails to correspond to any objective reality and hence has no truth.
Formalism: A formalist might focus on the internal consistency of a theory rather than its semantic truth.
Constructivism: Constructivists might emphasize the existence of a model as essential for a theory to have any meaningful application or truth.
Broader Considerations:
Not all logical systems equate truth solely with the existence of a model. For example, paraconsistent logics allow some theories to tolerate contradictions without being dismissed outright.
In metaphysics or epistemology, "truth" might be defined differently, detached from formal models.
In summary, the statement aligns well with classical logic but could be debated in other logical frameworks or philosophical contexts.
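To make the model-theoretic point above concrete, here is a minimal sketch (my own illustration, not from the thread): exhibiting a single structure that satisfies all of a theory's axioms is a model-existence proof, and therefore a proof that the theory is satisfiable. The group axioms and the mod-3 structures are just a convenient finite example.

```python
from itertools import product

# A tiny "theory": the group axioms, expressed as checks over a finite
# structure (domain, op). Exhibiting one structure that passes every
# check is a model-existence proof: the theory is satisfiable.

def is_model(domain, op):
    # Closure: op maps pairs of elements back into the domain.
    if any(op(a, b) not in domain for a, b in product(domain, repeat=2)):
        return False
    # Associativity: (a.b).c == a.(b.c) for all elements.
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(domain, repeat=3)):
        return False
    # Identity: some e with e.a == a.e == a for all a.
    identities = [e for e in domain
                  if all(op(e, a) == a and op(a, e) == a for a in domain)]
    if not identities:
        return False
    e = identities[0]
    # Inverses: every a has some b with a.b == e.
    return all(any(op(a, b) == e for b in domain) for a in domain)

# Z mod 3 under addition is a model of the group axioms...
print(is_model({0, 1, 2}, lambda a, b: (a + b) % 3))   # True
# ...while {0, 1, 2} under multiplication mod 3 is not (0 has no inverse).
print(is_model({0, 1, 2}, lambda a, b: (a * b) % 3))   # False
```

The first call prints True because the structure satisfies every axiom; the second prints False, which tells us nothing about consistency one way or the other. That asymmetry is the point: a model proves satisfiability, but the absence of one particular candidate model proves nothing.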
I think that I mentioned at some point that this claim presupposes Tarski's semantic theory of truth. Other truth theories are possible, but these alternatives generally turn out to be incompatible with mathematics.
OK. Fair enough. I couldn't at this point begin to comment on "Tarski's semantic theory of truth"; however, from what you and ChatGPT are telling me, your argument does not appear to be incontrovertible at best. At worst, it could be just plain false, if ChatGPT is right that a GPT need not be possible for something in order for it to be true.
Problem 1: We would have to determine whether Tarski's semantic theory of truth is true, or at least the best of the existing truth theories.
Problem 2: Even if it is the best of the existing truth theories, we would still need to determine whether Tarski's theory corresponds 100% with what is the case in the universe. I mean, Tarski's theory could be the best theory we have to go on right now, yet we could still be a long way from finding the "true" truth theory that holds in the universe. Someone could come along tomorrow and demonstrate that Tarski's theory has problems, or that there is a better truth theory.
On the other hand, perhaps there is a God and my points are moot. I'm agnostic with respect to most theological questions, so I'm open to the possibility of there being a God as well.
In the final analysis, I suspect we are still shy of an incontrovertible proof of the existence of God. And not only that, but there is still the issue of which version of God is the true one. Who, if anyone, has figured out the mind of God most accurately?
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 7:56 am
by godelian
Gary Childress wrote: ↑Mon Jan 13, 2025 7:34 am
For all I know, he may be a genius who has discovered what no other human being has yet discovered with regard to the question of whether there is a God or not. I'm interested in finding out more about his argument because such topics are of high interest to me.
I did not prove the existence of God.
Gödel came up with an ontological proof, but I am not necessarily a fan of it:
https://en.wikipedia.org/wiki/G%C3%B6de ... ical_proof
I merely argued that the absence of atheismGPT means that atheist theory has no model and therefore no truth. It is simply not possible to produce a model-existence proof for atheism. Islam has an IslamGPT which proves the existence of a model and therefore the existence of a truth for Islamic theory.
As ChatGPT has pointed out, this argument ultimately rests on Tarski's semantic theory of truth.
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 8:01 am
by Gary Childress
godelian wrote: ↑Mon Jan 13, 2025 7:56 am
Gary Childress wrote: ↑Mon Jan 13, 2025 7:34 am
For all I know, he may be a genius who has discovered what no other human being has yet discovered with regard to the question of whether there is a God or not. I'm interested in finding out more about his argument because such topics are of high interest to me.
I did not prove the existence of God.
Gödel came up with an ontological proof, but I am not necessarily a fan of it:
https://en.wikipedia.org/wiki/G%C3%B6de ... ical_proof
I merely argued that the absence of atheismGPT means that atheist theory has no model and therefore no truth. It is simply not possible to produce a model-existence proof for atheism. Islam has an IslamGPT which proves the existence of a model and therefore the existence of a truth for Islamic theory.
As ChatGPT has pointed out, this argument ultimately rests on Tarski's semantic theory of truth.
Do we know for certain that an atheismGPT is not possible? And if so, why is an atheismGPT not possible?
Re: Article 18: Freedom of Thought or License for Falsehood?
Posted: Mon Jan 13, 2025 8:10 am
by godelian
Gary Childress wrote: ↑Mon Jan 13, 2025 7:56 am
At worst, it could be just plain false, if ChatGPT is right that a GPT need not be possible for something in order for it to be true.
The model-existence proof does not need to be a GPT; it can be any program. The Curry-Howard correspondence does not require a GPT, only a terminating program. Every GPT is a program, but not every program is a GPT.
Gary Childress wrote: ↑Mon Jan 13, 2025 7:56 am
Problem 1: We would have to determine whether Tarski's semantic theory of truth is true, or at least the best of the existing truth theories.
It is the only truth theory that is compatible with mathematics.
Gary Childress wrote: ↑Mon Jan 13, 2025 7:56 am
Problem 2: Even if it is the best of the existing truth theories, we would still need to determine whether Tarski's theory corresponds 100% with what is the case in the universe.
Physical truth is just one small part of the truth. A truth theory that cannot handle the abstract, Platonic truth of the arithmetical universe is useless in the context of the truth of mathematics or of other abstract subjects.
Gary Childress wrote: ↑Mon Jan 13, 2025 7:56 am
In the final analysis, I suspect we are still shy of an incontrovertible proof of the existence of God.
Gödel introduced five axioms from which he proves the existence of a God-like object. This obviously does not solve the problem, because there is no proof of these five axioms themselves. It is ultimately an exercise in infinite regress. There are people who are still interested in that; I am not.
A proof always "proves something from X". Then the question arises: "From what exactly can we prove X?"
That is an infinite regress, and it is uninteresting and ultimately useless. Aristotle already argued at length in the "Posterior Analytics" why it is pointless to pursue it.