This is likely an elementary question for logicians or theoretical computer scientists, but I'm less than adequately informed in either field and don't know where to find the answer. Please excuse the various vague notions that appear here without their proper formal setting; my question is, in effect, about what I should have said instead of what follows:
When we make a definition, whether of a property or of the explicit value of a symbol, we seem to be changing the language itself. Prescribing meaning to a word might be viewed as a transformation of the formal language akin to taking a quotient, where we impose further relations on a set of generators.
I don't know how to describe a 'semantic' object precisely, but let me assume, ad hoc, that one can be taken to be an equivalence class of words, under an equivalence relation supplied by the underlying logic. If the complexity of such an object is the size of the smallest word in the language that describes it, then making a definition lowers the complexity of some objects (and leaves the rest unchanged). The obvious example: if I add the word "group" to my language, then saying "G is a group" is a lot shorter than listing the group axioms.
It seems that lowering complexity is a main point of making definitions. Further, one reason mathematical theory-building works seems to be a compression effect: I can use fewer resources to describe more complex objects, at the cost of the effort it takes to absorb the definitions from a textbook.
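To make the picture concrete, here is a minimal toy sketch in Python, under my own ad-hoc reading of "complexity" above (nothing standard is assumed): an object is identified with its fully spelled-out description, a definition is a macro abbreviating a longer phrase, and the complexity of an object is the length of the shortest description that expands to it. The `group` macro and the four "axiom" names are made up purely for illustration.

```python
from itertools import product

def expand(description, definitions):
    """Replace defined symbols by their expansions until none remain."""
    changed = True
    while changed:
        changed = False
        for symbol, expansion in definitions.items():
            if symbol in description:
                description = description.replace(symbol, expansion)
                changed = True
    return description

def complexity(target, vocabulary, definitions, max_len=4):
    """Length, in symbols, of the shortest description that expands to `target`."""
    symbols = vocabulary + list(definitions)
    for n in range(1, max_len + 1):
        for words in product(symbols, repeat=n):
            if expand(" ".join(words), definitions) == target:
                return n
    return None  # not describable within max_len symbols

# "G is a group", spelled out in the base vocabulary (made-up property names):
group_axioms = "closed associative has-identity has-inverses"
base_vocab = ["closed", "associative", "has-identity", "has-inverses"]

print(complexity(group_axioms, base_vocab, {}))                       # 4: list the axioms
print(complexity(group_axioms, base_vocab, {"group": group_axioms}))  # 1: just say "group"
```

In this toy model, adding the definition strictly lowers the complexity of the "group" object from 4 symbols to 1, while the complexity of anything already describable without it can only stay the same or drop; that is the compression effect I mean.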
Likely there is some theory out there that describes this process, but I've not been able to find it by searching. I would appreciate being pointed towards the right source, even if it's just a Wikipedia link. Specifically:
Where can I find a theory, in formal logic or complexity theory, that studies the process of adding definitions to a mathematical language, viewed as a transformation that changes complexity?