I don't think I completely agree with this. A formal language comes with an unambiguous grammar/syntax; it's an integral part of the whole system. If a statement cannot be parsed, this happens within that whole system even if it's not in the language as such. The rules for parsing a formal language should be completely rational (one could say: mathematical in a broad sense), and as such don't need any human beings to interfere; a Turing machine will do. The system-at-large (math's formal language plus the parser) can tell you a statement is nonsensical, without any human interference. In a running computer, one can argue, math does exist by itself, and does itself.
I cannot see how adding the rules of parsing to the system affects my point in any way. It is still a human being who must pass judgement on whether the rules have been applied correctly. In the same way, any computer program designed to help with such a task must be checked for correctness by some human agent.
It is not the formal system that is rational, but the human being who applies it. To ascribe the property of rationality to a formal system makes no sense, since a formal system cannot act rationally. Indeed, it cannot act at all. Neither does a computer programmed according to the rules of a formal system act rationally. It does not act at all. It merely runs instruction sets according to its programming.
As you note, the parsing of a formal language FL does not take place within the language itself. Parsing is something done in addition to the definition of FL (which obviously also occurs outside of FL). Parsing is a part of the application of FL. No language can check the grammatical correctness of its own statements. Only the users of a language can do that. It is not by a linguistic act that an ungrammatical statement is found to be lacking in sense. (Though you can use language to express the fact that you have found a statement to be devoid of sense. And the same language can also be used to show why a statement is to be regarded as senseless.) It is me, not the English language or the grammar of English, who discovers "giraffe blunt cellar works" to be ungrammatical. In order to make that discovery, I apply the rules of English grammar.
A Turing machine will not do at all here, since it cannot find any statement whatsoever to be grammatical or ungrammatical. It can only generate strings of symbols according to the rules it was programmed with. "Sense" doesn't even enter the picture here. Only a human being can find a statement to be grammatical or ungrammatical - if aided by a Turing machine, by making sense of what the TM has put out.
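To make this concrete, here is a minimal sketch of my own (the word lists and the single "Det Noun Verb" rule are illustrative assumptions, not a model of English). The program classifies strings purely by symbol lookup; it outputs a boolean, and whether that boolean means "grammatical" is something only the human reading the output decides.

```python
# Toy recognizer for the pattern: Determiner Noun Verb.
# The word lists and the single production rule are illustrative
# assumptions, not a model of English grammar.
DETERMINERS = {"the", "a"}
NOUNS = {"giraffe", "cellar", "table"}
VERBS = {"works", "jumps", "sleeps"}

def is_grammatical(sentence: str) -> bool:
    """Return True iff the sentence matches 'Det Noun Verb'."""
    words = sentence.lower().split()
    return (len(words) == 3
            and words[0] in DETERMINERS
            and words[1] in NOUNS
            and words[2] in VERBS)

print(is_grammatical("the giraffe works"))           # True
print(is_grammatical("giraffe blunt cellar works"))  # False
```

The machine mechanically maps strings to True or False; calling the first sentence "grammatical" and the second "ungrammatical" is the human's contribution, not the program's.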
The system-at-large cannot "tell" me anything at all. As Wittgenstein once noted, all mathematical statements say exactly the same thing, namely nothing. Note I had to put "tell" in quotes because of its metaphorical use. This is not nitpicking but an important distinction, since a formal system "telling" me something is just a figure of speech used to convey the notion of me using the formal system, applying it, to hopefully make discoveries which can answer the questions I am trying to resolve. It is me who can make discoveries with the aid of formal systems. Formal systems do not have the ability to discover. To ascribe to them the power of making discoveries is obviously completely nonsensical. In the same way, a computer running a program cannot be said to be "doing math", for it cannot "do" anything.
I can switch on a computer and have it execute computations. It will, however, only do so (metaphorical "do" here) on my instigation. The mere flowing of electrical currents within silicon structures does not constitute "doing math". That is not how we normally use that expression. You will probably call this nitpicking, but it is actually a very important distinction, which I explained above and in my previous posting. What happens inside computers is quite different from "doing math". Only when a human operates the machine can "doing math" intelligibly be said to be happening, and even then it is the human who does it with the aid of the machine; it is not the machine itself doing it.
Could someone with no formal mathematical training read a textbook full of formulas and proofs and figure out what it is good for? The answer is no - not because the symbolic transformations shown would be too cryptic, but because he or she was never shown how to apply them. The symbolic manipulations and transformations of math can be learned by rote with little or no understanding of what use they have. But if the relation in which the formalism stands to the rest of the world is not explained, it will be just ink on paper. The explanation of its use is something which has to be given in addition to the formal system. I mention this to emphasize the importance of the difference between algorithmic procedures (= following rules) and using algorithmic procedures to achieve some end. Computers can be programmed so that they will "follow" certain rules (metaphorical use of "follow" here, since the computer has no concept of what is happening when it is running programs). But they cannot be made to set out to achieve ends of their own, since they are incapable of showing the appropriate behaviour which would warrant the ascription of volition and/or goal-oriented action to them.
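The gap between running an algorithm and using it for an end can be illustrated with Euclid's algorithm (a standard procedure; this sketch is my own). The program only rewrites pairs of numbers step by step; that the result is "the greatest common divisor", or that anyone might want it for, say, reducing fractions, appears nowhere in the instructions themselves.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

# The machine only rewrites number pairs:
# (48, 18) -> (18, 12) -> (12, 6) -> (6, 0)
print(gcd(48, 18))  # 6
```

Nothing in the rule-following itself distinguishes a pupil computing a GCD for an exam from the same symbol shuffling done for no purpose at all; the end being pursued lies with the user, not in the procedure.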
Of course, the system itself was set up by humans, but one could say that about everything we study, model, etc. There's usually the condition that both input and output have to be understandable by humans.
I think there is no "but" here. This is actually the crux of the problem. All of our searching for truth, knowledge, facts, etc. hinges on human capacities, not the least of which is our language. Our understanding of the world, including our own capacities, is often crucially dependent on that. There could not be any notion of "truth" without language. The belief that our use of the word "truth" makes sense because there is some kind of independent, "absolute truth" "out there" which our term "truth" refers to is part of the philosophy of Plato. This is a metaphysical notion (one which I do not subscribe to), not a scientific one, which is nonetheless clandestinely applied in many "scientific" arguments, especially those about the nature of reality.
The real point for me here is, that there's no significant difference with natural language when it comes to labelling a statement nonsensical. If I say "table tomorrow breakfast jump", we label that as nonsense because our natural language parser rejects it.
Very interesting twist of logic here. You assume that human beings have a "natural language parser" which decides whether a statement is grammatical ("parses") or not. So you are from the start taking the view that man's capacity for language can be modelled on a computer paradigm using algorithms. This is a self-defeating move, since the difference between formal languages and natural languages is precisely what is at stake here. You're presupposing the result you want to argue for.
The difference being that natural language is "fuzzy", meaning we can say some sentences are slightly sensical, or slightly nonsensical, etc.
Putting "fuzzy" in quotes once more signifies metaphorical use. It isn't really clear what is supposed to be meant by the term here. What exactly is "fuzzy" about natural language? Of course you could hope that certain ambiguities of colloquial expressions can be eliminated by replacing them with more strictly defined terms in a formal language. But the definitions of those terms will, naturally, have to use natural language. Natural language is always what it comes down to in the end. We simply do not have anything else we can use or appeal to. There is no way around it, and I think it's actually easy to see that there isn't. The final link in the chain is always a human being. So if you think natural language is somehow tainted by "fuzziness", then so is any formal language, since it must necessarily appeal to and build on natural language.
The notion of a statement being "slightly sensical" is equally unclear. A statement either makes sense or it doesn't. There are no shades of grey in the category of sense.
But we could create fuzzy math language too, if we wanted to. We probably don't want to, because we want math sentences to be parsed rationally, independently of and not needing a human's subjectivity.
The fact that rationality cannot be equated with algorithmic procedures is what I tried to show at the beginning of this thread. There is no such thing as "rational parsing". "Rational parsing" as opposed to what? Whether a "parsing violation" has occurred or not is for a human being to say, not for some set of rules, since rules a) say nothing and b) are of no use in the task of checking whether they were applied correctly. (What do you do if your task is to check whether "come home every Tuesday" was applied correctly? You go to the person's home on Tuesday and see if he is there. The rule itself will not help you with looking around and identifying the person in question. The act of checking is one thing. The rule prompting you to do so is another.)
"Subjectivity" is another term which I find hard to understand in this context. How could rules be parsed or followed "objectively"? As explained above, computers cannot follow rules, only humans can.