AI: The fallacy of the Turing Test
The Turing test is simple to understand. In a typical setup, a human judge engages in text-based conversations with both a human and a machine, without knowing which is which, and must determine which participant is the machine. If the judge cannot reliably tell them apart based solely on their conversational responses, the machine is said to have passed the test and demonstrated convincing human-like intelligence.
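To make the setup concrete, here is a minimal sketch of the imitation game as a statistical decision, with hypothetical `judge`, `human_reply`, and `machine_reply` functions standing in for the participants; nothing here comes from Turing's paper beyond the structure described above.

```python
import random

def run_imitation_game(judge, human_reply, machine_reply, n_rounds=100):
    """Toy harness for the imitation game: each round, the judge sees
    two anonymous answers to the same question and guesses which one
    came from the machine. Returns the judge's guess accuracy."""
    correct = 0
    for _ in range(n_rounds):
        question = "Describe your childhood in one sentence."
        answers = [("human", human_reply(question)),
                   ("machine", machine_reply(question))]
        random.shuffle(answers)  # hide which participant is which
        guess = judge(question, answers[0][1], answers[1][1])  # 0 or 1
        if answers[guess][0] == "machine":
            correct += 1
    return correct / n_rounds

# If the machine's answers are indistinguishable, the judge can do no
# better than chance and the machine "passes" the test.
accuracy = run_imitation_game(
    judge=lambda q, a0, a1: random.randint(0, 1),
    human_reply=lambda q: "I grew up by the sea.",
    machine_reply=lambda q: "I grew up by the sea.",
)
print(f"judge accuracy: {accuracy:.2f}")  # ~0.50 -> the machine passes
```

Note what the sketch makes explicit: "passing" is purely behavioral. The harness never inspects what the participants are, only what they say.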
The test is convenient: it neatly avoids the hard questions, such as defining intelligence and consciousness. Instead, it lays out a naive test founded on an ontological fallacy: perceiving something as another thing does not make it that thing.
The most evident critique of the Turing Test is embedded in the fundamentals of machine learning itself:
- The model is not the modeled. However precise it becomes, it remains an approximation. A simple analogy makes the ontological fallacy clear: it is like going to a magic show, seeing a table float above the ground, and believing that the levitation really happened. How many bits of information separate a real human from a chatbot? Assuming, without any justification, that the number is exactly zero is an extraordinarily naive claim (the sketch below makes this precise).
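The "bits" framing can be given a precise form with a standard result from hypothesis testing; the following is a hedged sketch assuming human text and bot text are drawn from distributions P and Q, with the Chernoff-Stein lemma giving the best achievable error exponent.

```latex
% Human text ~ P, bot text ~ Q. Testing "human vs. bot" over n
% independent samples, the Chernoff--Stein lemma gives the best
% achievable type-II error:
\[
  \beta_n \approx 2^{-n\,D(P\,\|\,Q)},
  \qquad
  D(P\,\|\,Q) = \sum_x P(x)\log_2\frac{P(x)}{Q(x)} \ \text{bits}.
\]
% Claiming the gap is exactly zero bits means D(P||Q) = 0, i.e. the
% two distributions are identical; any positive gap means a long
% enough conversation will expose the bot.
```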
Interestingly, the Turing Test also fails badly at defining so-called super-intelligence. A super-intelligent machine would evidently fail the test simply by providing super-intelligent answers, unless it decided to fool the experimenter, in which case it could appear as anything it desired, rendering the test meaningless.
Regarding modern LLMs, the veil is already falling. LLMs have quirks, like an overuse of em-dashes: a strange feature that is indicative of something potentially pathological in the way the models are trained. These dashes would have been expected if a majority of people used them. However, hardly anyone knows how to type them on a keyboard. This shows that LLMs are not following the manifold of human writing, and it suggests the existence of other biases.
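As a toy illustration of how a single quirk betrays the distribution shift, here is a minimal sketch comparing em-dash rates between two text samples; the sample strings are invented placeholders, and a fair comparison would of course need real corpora.

```python
def em_dash_rate(text: str) -> float:
    """Em-dashes (U+2014) per 1,000 characters of text."""
    if not text:
        return 0.0
    return 1000 * text.count("\u2014") / len(text)

# Invented placeholder samples; a fair test needs real corpora.
human_sample = "I never use that dash. Most people type plain hyphens."
llm_sample = ("The answer\u2014as always\u2014depends on context\u2014"
              "and context is everything.")

print(f"human rate: {em_dash_rate(human_sample):6.2f} per 1k chars")
print(f"llm rate:   {em_dash_rate(llm_sample):6.2f} per 1k chars")
# A rate far above typical human usage is a crude stylometric signal
# that the text has drifted off the manifold of human writing.
```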
Finally, embedded in the promotion of the Turing test there is often a lazy ontological theory of materialism, which stipulates that consciousness is not fundamental but a byproduct of matter, and often negates its existence altogether: it is not that consciousness can be faked, or that it results from computation; the understanding is that consciousness does not exist at all, that it is an illusion that takes over the subject of the experience. Again, a theory of convenience, based on little justification, that produces a major paradox:
Who is conscious of the illusion of consciousness?