> The philosophy of nonexisting things can be confusing
This comment hit a raw nerve and tied together many things in my own understanding.
Because concepts can depict non-existing things, we have to learn "operationally", that is, via feedback from action in the real world. Language and imagination can create concepts that have no ground truth, even though they may exist in the "inter-subjective" reality people create among themselves; religion is one such inter-subjective reality. This explains the scientific method: why it was needed, and why it has been so successful at cutting through the mass of concepts that make no operational sense. It also explains why the formalisms of math and science, rather than natural language, have succeeded at depicting concepts operationally. And it ties into Sutton's recent podcast appearance, where he argues that LLMs are a dead end in the sense that they cannot create ground truth via experience and feedback; they are stuck in token worlds.
But creating a concept and assigning a symbol to it is a basic act of abstraction. When a concept is not grounded it can become inconsistent and go haywire; when it is overly consistent it becomes robotic and uninteresting. As humans, we strike a balance: imagination creates concepts that make things interesting, and real-world experience then culls them to make them useful.