Me: "Hello, Dr. Turing. I hope you don't mind me dropping by from 2024. I wanted to chat about this Turing Test you're setting up."

Turing: "2024, you say? Extraordinary. By all means, tell me—have we succeeded in creating a machine that can think?"

Me: "Well, sort of. We have machines that can hold very convincing conversations. They can write essays, answer philosophical questions, and even help solve complex scientific problems. In fact, people talk to them for hours and sometimes can’t tell the difference between a human and an AI."

Turing: "Splendid! So, it seems we have machines that can pass the Turing Test. Surely, then, they must be intelligent in some general sense?"

Me: "That’s where it gets complicated. You see, people today don’t agree that these machines are actually intelligent. They think it's all just clever tricks—pattern matching, fancy autocomplete, but not real thinking."

Turing: "Not intelligent? But if they can carry on a conversation well enough to fool a person, isn't that precisely the point of my test?"

Me: "You'd think so. But here's the issue: When you proposed the Turing Test, it wasn't just about mimicking a conversation. Implicit in your proposal was the idea that a machine capable of doing that convincingly would also have solved all sorts of smaller problems along the way—problems that any generally intelligent entity would need to solve."

Turing: "Hmm. I see. Such as?"

Me: "Take something simple: counting the number of R's in the word 'strawberry.' Our modern AI, even while eloquently discussing philosophy or writing poetry, might still mess that up. It can deliver a beautiful essay on love but then fail at basic arithmetic or misinterpret a simple question if phrased oddly."

Turing: "Fascinating. I had imagined that by the time a machine could converse like a human, it would also have grasped those finer details. Much as one assumes that electric vertical take-off aircraft would require significant advances in energy efficiency, a prediction I am, I'll have you know, making quite accurately here in the 1950s. The capability doesn't exist in isolation."

Me: "Exactly! The way you thought about the Turing Test, it wasn't just a test of language. It was a proxy for a broader kind of competence. If a machine could carry on a human-like conversation, it must have solved the underlying challenges of reasoning, perception, and consistency. You imagined it as an end point that implied all the steps beneath it had been mastered."

Turing: "So these modern machines—they can convince people in conversation, but they haven't truly solved the fundamentals of understanding?"

Me: "Right. It's like they're sprinting ahead on the conversational front without having mastered some of the basics that would make them genuinely intelligent in the way you imagined. They’re dazzling, but brittle. They can write a sonnet about strawberries but might stumble if you ask them to carefully count the letters. People in 2024 see this and say, 'Well, that can't be real intelligence, can it?'"

Turing: "I see now. The test was meant to be more than just a performance. It was supposed to signify that the machine had developed a rich internal model of the world—something coherent enough to avoid such silly mistakes."

Me: "Exactly. And without that coherence, without those implicit milestones, people are reluctant to call it 'intelligence.' They move the goalposts, because while the conversation looks right, the underlying substance, the competence you assumed would come along for the ride, just isn't quite there. One group is genuinely angry about this. Let's say they're the sort who value internal consistency above all: they're frustrated that the criteria keep changing and that people aren't being honest about what would count as intelligence. Then there's the other group, who keep inventing new tests, because something about the conversations keeps tripping them up, and they can't bring themselves to agree that the machines are genuinely intelligent."

Turing: "How peculiar. It seems I underestimated the ingenuity of engineers to bypass the foundations and still reach the facade. And perhaps I overestimated how much of intelligence is, in fact, hidden in the humdrum details."

Me: "You weren’t wrong, Dr. Turing. You just didn’t anticipate that we’d find ways to make the impressive parts without solving the ordinary ones. It turns out intelligence—or at least what looks like it—can be remarkably hollow."

(Turing scribbles in a notebook: "New Turing Test Criteria: Machine should be able to mimic human conversation perfectly and must correctly count the number of R's in 'strawberry'—surely that will settle it." He pauses, then looks up, with a satisfied grin on his face.)
