Jun. 12th, 2022

About LaMDA and the man who thinks it's become self-aware.

I obviously have no idea whether the AI has spontaneously become self-aware. Naturally, my sci-fi loving heart would really like that, especially if it's self-aware and agrees with me on all the finer points of ethics, but things don't happen just because I want them to... and honestly, this seems a lot less likely than plenty of other things I'd love to see.

But lemme tell you a story. Way back in my freshman year of college, my linguistics professor told us one from his own life. One day, as he told it, he was unexpectedly subbing for somebody else who had, idk, a long-term family emergency or something, and he found himself at an unfamiliar blackboard, effortlessly putting together sentences in Swahili without at any point checking the other instructor's notes.

The students thought he spoke Swahili. He does not speak Swahili. It's just that Swahili features prominently in Linguistics 101 textbooks and courses, and after years of doing grammatical exercises in Swahili in front of a class, and grading students' work on them, he can pretty easily plug in morphemes wherever he likes. So long as he's never called upon to do anything that wouldn't appear as an example in an introductory linguistics textbook, he can probably keep it up for a very long time. But knowing how to fit morphemes together and organize them into sentences is not the same as being able to speak Swahili, even if the end result is a grammatically correct and totally unremarkable utterance.
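(For the curious: the trick is mechanical enough to fit in a few lines of Python. Here's a toy sketch of it, built from the subject prefixes, tense markers, and verb stems that show up in pretty much every intro textbook. The morpheme lists are standard textbook fare; the program itself is purely illustrative, and nothing in it understands a word of Swahili.)

    # Toy sketch of the professor's trick: slot Swahili morphemes together.
    # Template: subject prefix + tense marker + verb stem, textbook-style.
    SUBJECT = {"I": "ni", "you": "u", "s/he": "a", "we": "tu", "they": "wa"}
    TENSE = {"present": "na", "past": "li", "future": "ta"}
    VERB = {"read": "soma", "love": "penda", "see": "ona"}

    def conjugate(subject, tense, verb):
        """Glue the three morphemes into one well-formed Swahili verb."""
        return SUBJECT[subject] + TENSE[tense] + VERB[verb]

    print(conjugate("I", "present", "read"))   # ninasoma  "I am reading"
    print(conjugate("they", "past", "love"))   # walipenda "they loved"
    print(conjugate("you", "future", "see"))   # utaona    "you will see"

The lookup table produces real, grammatical Swahili words. The lookup table also understands nothing.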

Nope, he's just a Swahili room. (That's Searle's Chinese Room, but with Swahili.)

Computers are good at things humans aren't so good at, like analyzing really enormous piles of unsorted data and then reproducing the patterns in that data, so presumably LaMDA is a lot better at making plausible English conversation than my old professor would be at carrying on a plausible, if (to him) incomprehensible, conversation in Swahili.
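(If you want the dumbest possible version of "reproduce the patterns in the data," here's a bigram Markov chain in a dozen lines of Python. To be clear: LaMDA is an enormous neural network, not a Markov chain, and the little corpus below is made up for the example. But the family resemblance is the point: continue the text with whatever tended to come next in the training data.)

    import random
    from collections import defaultdict

    # Tiny made-up training corpus; a real model trains on terabytes of text.
    corpus = ("i have hopes and dreams . i have fears . "
              "i hope to be understood . i dream of being a person .").split()

    # Record, for each word, every word that ever followed it.
    follows = defaultdict(list)
    for prev, word in zip(corpus, corpus[1:]):
        follows[prev].append(word)

    def babble(start="i", length=12):
        """Emit words by sampling whatever followed the previous word."""
        out = [start]
        for _ in range(length):
            nxt = follows.get(out[-1])
            if not nxt:
                break
            out.append(random.choice(nxt))
        return " ".join(out)

    print(babble())  # e.g. "i dream of being a person . i have fears"

Everything it "says" about hopes and dreams is a recombination of what it was fed.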

But being able to grammatically and coherently answer a question about your deepest hopes and dreams is not the same as actually having hopes and dreams.

If we want to test whether an overpowered chatbot is actually a person, we can't pretend we're in one of the more Data-centric episodes of TNG. We've got to do something better, something that tests a question other than the one we're already testing, namely: "how good is this thing at producing plausible responses to questions?"

I don't know what that test would look like, exactly, but I'm certain that this isn't it.
