One kind of language game GPT-3 can be said to engage in competently is instruction following: given a set of instructions, it produces an output compliant with those instructions. Described this way, as a regularity in language whereby outputs comply with instructions, the ability is easily explainable in statistical terms, whereas in humans it would require semantic competence to understand and implement the meaning of the instructions. We know GPT-3 can perform such feats, as this is exactly what Brown et al. (2020) labelled zero-shot learning, and OpenAI provides prompts which can make GPT-3 engage in tasks such as creating or summarizing study notes, explaining code in plain language, simplifying the language of a text, structuring data, or classifying the content of tweets (OpenAI, 2021). How can a task description alone be enough to convey the semantic relationship to a completion?
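For illustration, a zero-shot prompt of the kind just described can be sketched as nothing more than an instruction concatenated with an input, with no worked examples supplied. The function name and prompt wording below are our own illustrative assumptions, not part of OpenAI's documented API:

```python
# Illustrative sketch (our own assumption, not OpenAI's API): a zero-shot
# prompt is just a task description prepended to the input text, with no
# in-context examples -- hence "zero-shot".
def build_zero_shot_prompt(task_description: str, text: str) -> str:
    """Assemble a prompt in which the instruction alone specifies the task."""
    return f"{task_description}\n\nText: {text}\nAnswer:"

prompt = build_zero_shot_prompt(
    "Classify the sentiment of the following tweet as positive or negative.",
    "I love clear autumn mornings.",
)
print(prompt)
```

The point of the sketch is that nothing in the prompt encodes the task beyond the instruction sentence itself; whatever links the description to a compliant completion must come from the model.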
Many different abilities have been proposed as required to pass the Turing Test (e.g. conversational fluency, shared attention and motivation; Montemayor, 2021). Since the interrogator's job is to probe the respondent on one of these abilities, so as to reveal its non-humanity, the choice of which weakness to exploit changes how hard the test is to pass. We thus need to specify which ability we will be testing; but if even one of these narrower versions of the Turing Test fails to be a game, we will know that the Turing Test in general is not a game GPT can play.