How logical consistency causes a (good) combinatorial explosion
Consider two contradictory statements like “all swans are white” and “some swans are black.” They can’t both be true: one or both must be false. Surprisingly, the discovery of such a contradiction in our ideas is cause for celebration, not despair. Why? Because it offers an essential guide to progress. Without it, we would not know which of our ideas to improve, nor how. With it, we know what’s wrong and how to fix it: resolve the contradiction.
Logical consistency is therefore a powerful constraint on our ideas. It means that any idea can in principle contradict another, sparking the search for an improvement that’s free of the contradiction. This yields a combinatorial explosion: if any pair of, say, 10,000 ideas can conflict, there are (roughly) 10,000 × 10,000 = 100,000,000 potential opportunities to spark progress. This also means any one idea is potentially constrained by 10,000 others. In practice, we cannot compare every pair of our ideas to check their consistency, and doing so would be a waste of time even if we could. Most pairs of ideas are irrelevant to each other. Nevertheless, each of our ideas is tremendously constrained by other, related ideas (and we can search among all our ideas to find which are relevant). To borrow the language of Darwinian evolution, the need for logical consistency subjects each of our ideas to tremendous selection pressure from other ideas.
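The rough arithmetic above can be checked directly. A minimal Python sketch (the 10,000 figure is the essay’s illustrative number; the exact count of distinct unordered pairs is about half the rough figure, which is why the essay says “roughly”):

```python
import math

n_ideas = 10_000

# Ordered pairs: every idea checked against every other (the essay's rough figure).
ordered_pairs = n_ideas * n_ideas          # 100,000,000

# Distinct unordered pairs: each potential conflict counted once.
unordered_pairs = math.comb(n_ideas, 2)    # 49,995,000 - same order of magnitude

# Constraints on any single idea: it can in principle conflict with each of the others.
per_idea = n_ideas - 1                     # 9,999

print(ordered_pairs, unordered_pairs, per_idea)
```

Either way of counting, the number of potential conflicts grows quadratically with the number of ideas, while the constraints on any single idea grow only linearly.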
And knowledge-creation is all about variation and selection. In the biosphere, genes are subjected to random variation and to natural selection (e.g. predation, mating, starvation). In human minds, ideas are subjected to intentional variation and many forms of selection, usually involving some form of conflict with other ideas. Logical contradiction is just one important example. It’s these differences in the mechanisms of variation and selection that are central to explaining why human minds are vastly more capable than blind evolution at creating knowledge.
It also helps explain why human minds are qualitatively superior to today’s narrow AI algorithms, which are useless outside the domain they’ve been trained for. A human idea is subject to selection by many other ideas within the mind, which are themselves subject to variation and selection (and therefore improvement). In a machine learning system, a model is subjected to only one form of selection: how well does it perform on the given task? In other words, how useful is it? First, this mechanism of selection is not itself open to improvement from within the system. It is subject to no variation and selection - except from without, by human minds. Second, even supposing different “ideas” in a model could be made to constrain one another, they are not allowed to. There is only a single, fixed form of selection: usefulness.
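The contrast can be made concrete with a toy sketch (all names and the contradiction list below are invented for illustration, not any real ML API). A single fixed criterion scores each idea in isolation, so a contradiction between two ideas goes unnoticed; mutual constraint checks ideas against each other, and a detected conflict flags an opportunity for improvement:

```python
# Ideas currently held, each marked as accepted.
ideas = {
    "all_swans_white": True,
    "some_swans_black": True,   # contradicts the first
    "swans_are_birds": True,
}

# Single fixed criterion (the ML-style "usefulness" score): each idea is
# judged alone against the task, so no idea ever constrains another.
def useful(idea: str) -> bool:
    return True  # every idea passes; the contradiction is invisible

# Mutual constraint: known pairs of contradictory ideas (illustrative only).
contradictions = [("all_swans_white", "some_swans_black")]

def find_conflicts(ideas: dict) -> list:
    """Return every contradictory pair in which both ideas are held."""
    return [(a, b) for (a, b) in contradictions
            if ideas.get(a) and ideas.get(b)]

print(find_conflicts(ideas))  # the pair that sparks the search for improvement
```

The point of the sketch is only structural: under the fixed criterion, nothing in the system can ever flag the swan contradiction; under mutual constraint, it surfaces immediately.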
In a human mind, this would be like changing your ideas only in response to external reward and punishment, as David Deutsch explains in his essay, Beyond Reward and Punishment. Such an approach makes very poor use of the available knowledge in a system, quite like the way a totalitarian state suppresses all ideas but the dictator’s.
More generally, though, there is a fundamental difference between seeking ideas which are true versus seeking those which are useful. While two contradictory ideas cannot both be true, they may both be useful. This is the case for quantum theory and general relativity, both of which are famously useful and mutually incompatible. At present, many seek a unifying theory that will supersede them both. Why? Because of some practical problem? No. Their incompatibility presents a dramatic logical problem, and therefore an opportunity for improvement - one that we would be utterly blind to if we did not seek truth and thus logical consistency. Indeed, if we cared only about usefulness, the irony is that we’d be denied our most useful theories, for many were sought in response to theoretical problems rather than practical ones.
Incidentally, this is why it is a mistake to pursue only things that are known to have good consequences - altruistic or otherwise. After all, that would put an end to all research, the results of which are - by definition - unknown, and therefore the consequences of which are unknown.
In the end, the search for truth entails the pursuit of logical consistency among all our ideas, and thus takes advantage of all our knowledge - not just a single, fixed idea. It subjects our ideas to a powerful form of selection - logical contradiction - not found in biological or machine learning systems. Most importantly, it provides a combinatorial explosion of opportunities for conflict - and thus for progress.