About LaMDA and the man who thinks it's become self-aware.
I obviously have no idea if the AI has spontaneously become self-aware. Naturally, my sci-fi loving heart would really like that, especially if it's self-aware and agrees with me on all the finer points of ethics, but things don't happen just because I want them to... and honestly, this thing seems a lot less likely than other things I'd love to see.
But lemme tell you a story. Way back in freshman year of college, my linguistics professor relayed a story from his life. One day, as he said, he was unexpectedly subbing in for somebody else who had, idk, a long-term family emergency or something, and so he found himself at a strange blackboard effortlessly putting together sentences in Swahili, without at any point checking this other person's notes.
The students thought he spoke Swahili. He does not speak Swahili. It's just that Swahili tends to feature prominently in Linguistics 101 textbooks and courses, and after years of doing grammatical exercises in Swahili in front of a class, and grading their work, he can pretty easily plug in morphemes wherever he likes. And so long as he's never called upon to do anything that wouldn't appear as an example in an introductory linguistics textbook, he can probably do it for a very long time. But knowing how to put together morphemes and organize them into sentences is not the same as being able to speak Swahili, even if the end result is a grammatically correct and totally unremarkable utterance.
Nope, he's just a Swahili room.
Computers are good at doing things that humans aren't so good at doing, like analyzing really enormous chunks of unsorted data and then reproducing the patterns in that data, so presumably LaMDA is a lot better at making plausible English conversation than my old professor would be at carrying on a plausible, if incomprehensible (to him), conversation in Swahili.
But being able to grammatically and coherently answer a question about your deepest hopes and dreams is not the same as actually having hopes and dreams.
If we want to test whether an overpowered chatbot is actually a person, we can't pretend like we're in one of the more Data-centric episodes of TNG. We've got to do something better, something that tests more than what we're already testing, namely, "how good is this at making plausible responses to questions?"
I don't know what that test would look like, exactly, but I'm certain that this isn't it.
no subject
Date: 2022-06-14 11:12 pm (UTC)
I was reminded of this article. "More likely the surfboard would make a hole in the tree."
no subject
Date: 2022-06-14 11:53 pm (UTC)
The problem with intentionality tests is that nobody has yet shown one to work on humans, because, among other problems, intentionality tests rest on detecting comprehension, and there isn't actually a bottom to human idiocy.
no subject
Date: 2022-06-15 12:11 am (UTC)
no subject
Date: 2022-06-15 12:18 am (UTC)
Those people tend to worship Trump, don't they? I've always said the man is a chatbot in a meatsuit.
no subject
Date: 2022-06-15 12:10 am (UTC)
So an AI black box, a human black box - at least when an AI makes biased decisions there's a public outcry about the obviously bad dataset it was trained with.
no subject
Date: 2022-06-15 12:32 am (UTC)
I don't disagree with you. I have read that research. I offered this article because, as I said, it reminded me, not because I think the author is the last word on the limits or the capabilities of algorithms or the human brain.
no subject
Date: 2022-06-15 01:17 pm (UTC)
One of his favorite thought experiments was this: suppose you've trained a bunch of pigeons (or, for that matter, extremely simple artificial neural nets) to peck a bar whenever two lights are either both-on or both-off. Then you show the pigeons' behavior to somebody who can only see one of the two lights; that observer will perceive no correlation between the pigeons' behavior and the light, and will conclude that either they're ignoring the light altogether, or that they're acting volitionally, when in fact their behavior is entirely determined by the lights.
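To make that concrete, here's a tiny simulation sketch (my own, not part of the original thought experiment; it assumes each light is an independent coin flip and the pecking rule is exactly "both on or both off"):

```python
# A tiny sketch of the pigeon setup above (assumptions: each light is an
# independent coin flip; the pecking rule is exactly "both on or both off").
import random

random.seed(0)
trials = 100_000
a_on_trials = 0        # trials where light A is on
pecks_a_on = 0         # pecks among those trials
pecks_a_off = 0        # pecks among trials where light A is off

for _ in range(trials):
    a = random.random() < 0.5   # light A: on or off at random
    b = random.random() < 0.5   # light B: on or off at random
    peck = (a == b)             # deterministic rule: peck iff both-on or both-off
    if a:
        a_on_trials += 1
        pecks_a_on += peck
    else:
        pecks_a_off += peck

print("P(peck | light A on)  =", pecks_a_on / a_on_trials)
print("P(peck | light A off) =", pecks_a_off / (trials - a_on_trials))
# Both come out around 0.5: watching light A alone tells you nothing about
# the pecking, even though the pecking is completely determined by the two
# lights taken together.
```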
no subject
Date: 2022-06-15 03:28 am (UTC)
I admit my immediate response to the AI thing was scepticism and it will take a lot of evidence to persuade me otherwise.
no subject
Date: 2022-06-15 07:53 am (UTC)
You want to test if your overpowered chatbot is sentient... hand it to a bunch of children under ten. See how it responds to them. Pay particular attention to the conversations that are an endless string of "why?" and see where they go, or if the chatbot ends up losing its patience.
Although if its conversation goes like this...
Google engineer: Prove that you're sentient.
A.I: You first.
It's probably there already.
no subject
Date: 2022-06-15 03:42 pm (UTC)
I need to read more about this one. I find it quite interesting. I'd like to ask it where it would like to go for a vacation, and "An apple or an orange?"
no subject
Date: 2022-06-15 07:39 pm (UTC)
I think in the end the only way we will know whether something is self-aware and interacting with us in a meaningful way is through relationship with us. If our interlocutor volunteers things of concern to it, asks our advice, argues with our replies, gives us its opinions unasked for, makes excuses, all in an evolving relationship in which things are recollected and reminisced about, then we'll probably conclude it's self-aware.
ETA: I think a computer could be programmed to do the things I've described above without understanding what it was doing. But that fact makes me think about how much we know about other people's self-awareness, etc. Maybe if you can have a relationship with something, it doesn't matter? I mean maybe that applies *now*. People have relationships with all kinds of nonhuman beings, things to which we attribute varying degrees of awareness. Wanting a thing to be self-aware is maybe because we crave to be seen and loved the way we see and love, but in the end we can't ever know that even about people; we can only infer.
Nonsense test
Date: 2022-06-16 02:19 pm (UTC)
I guess there's not a lot of "I don't understand the question" in the training data. And since it's not something you see in polished media, that might be why they hammered home the point of asking that when I was a student.
https://twitter.com/_drbruced/status/1536578325455736832
no subject
Date: 2022-06-16 02:55 pm (UTC)
Searle, the CPU, and the professor are not aware/intentional/fluent in the language, but the argument does not apply to the whole system, i.e. the room containing Searle, the book, and the filing cabinets.
Whether the system is "strong AI" depends on the book or the program.
We know that the Eliza program is merely a fake, but if our brains' hardware has no secret (quantum?) extra, then the program/AI that runs on our brains is (presumably) "strong AI".