conuly: (Default)
[personal profile] conuly
About LaMDA and the man who thinks it's become self-aware.

I obviously have no idea if the AI has spontaneously become self-aware. Naturally, my sci-fi loving heart would really like that, especially if it's self-aware and agrees with me on all the finer points of ethics, but things don't happen just because I want them to... and honestly, this thing seems a lot less likely than other things I'd love to see.

But lemme tell you a story. Way back in freshman year of college, my linguistics professor relayed a story from his life. One day, as he said, he was unexpectedly subbing in for somebody else who had, idk, a long-term family emergency or something, and so he found himself at a strange blackboard effortlessly putting together sentences in Swahili, without at any point checking this other person's notes.

The students thought he spoke Swahili. He does not speak Swahili. It's just that Swahili tends to feature prominently in Linguistics 101 textbooks and courses, and after years of doing grammatical exercises in Swahili in front of a class, and grading their work, he can pretty easily plug in morphemes wherever he likes. And so long as he's never called upon to do anything that wouldn't appear as an example in an introductory linguistics textbook, he can probably do it for a very long time. But knowing how to put together morphemes and organize them into sentences is not the same as being able to speak Swahili, even if the end result is a grammatically correct and totally unremarkable utterance.

Nope, he's just a Swahili room.
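
To make the morpheme-plugging concrete, here's a toy sketch (purely illustrative, and not anything my professor actually used; the handful of subject prefixes, tense markers, and verb roots are standard textbook Swahili):

    # Toy morpheme-plugging: build textbook-style Swahili verbs from
    # subject prefix + tense marker + verb root, with zero comprehension involved.

    SUBJECTS = {"I": "ni", "you": "u", "s/he": "a", "we": "tu", "they": "wa"}
    TENSES   = {"present": "na", "past": "li", "future": "ta"}
    ROOTS    = {"read": "soma", "like": "penda", "cook": "pika"}

    def verb(subject, tense, root):
        """Glue the morphemes together the way a Linguistics 101 exercise does."""
        return SUBJECTS[subject] + TENSES[tense] + ROOTS[root]

    print(verb("I", "present", "read"))   # ninasoma  ("I am reading")
    print(verb("we", "future", "cook"))   # tutapika  ("we will cook")
    print(verb("they", "past", "like"))   # walipenda ("they liked")

A lookup table and string concatenation: grammatically fine output, and no Swahili spoken anywhere in the process.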

Computers are good at doing things that humans aren't so good at doing, like analyzing really enormous chunks of unsorted data and then reproducing the patterns in that data, so presumably LaMDA is a lot better at making plausible English conversation than my old professor would be at carrying on a plausible, if incomprehensible (to him), conversation in Swahili.
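
To see the same trick in miniature: even a bare-bones word-level Markov chain (nothing remotely like LaMDA's actual architecture, just an illustration I'm sketching here) produces locally plausible English purely by replaying the statistics of whatever text it was fed:

    import random
    from collections import defaultdict

    # Word-level bigram model: learn which word tends to follow which,
    # then generate by sampling. Plausible-looking output, no understanding.
    def train(text):
        follows = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            follows[a].append(b)
        return follows

    def generate(follows, start, length=12):
        out = [start]
        for _ in range(length):
            nxt = follows.get(out[-1])
            if not nxt:
                break
            out.append(random.choice(nxt))
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the rug"
    model = train(corpus)
    print(generate(model, "the"))  # e.g. "the cat sat on the rug" (fluent, meaningless)

Scale the same idea up enormously and you get something far more fluent, but fluency is all the trick promises.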

But being able to grammatically and coherently answer a question about your deepest hopes and dreams is not the same as actually having hopes and dreams.

If we want to test whether an overpowered chatbot is actually a person, we can't pretend like we're in one of the more Data-centric episodes of TNG. We've got to come up with something better, a test that measures something other than what we're already measuring, namely, "how good is this at producing plausible responses to questions?"

I don't know what that test would look like, exactly, but I'm certain that this isn't it.

Date: 2022-06-14 11:12 pm (UTC)
sovay: (Rotwang)
From: [personal profile] sovay
But being able to grammatically and coherently answer a question about your deepest hopes and dreams is not the same as actually having hopes and dreams.

I was reminded of this article. "More likely the surfboard would make a hole in the tree."

Date: 2022-06-14 11:53 pm (UTC)
siderea: (Default)
From: [personal profile] siderea
I feel that while the author of that article is correct, they have shied away from the full implications of their evidence. While the answers the AI gave (which the author offers up as examples) are bad answers, they are, in fact, the sort of bad answers humans notoriously give. The author is arguing that the AI's failure to demonstrate a grasp of basic causality is evidence of a lack of what Searle (in his Chinese Room argument) called intentionality. They should go check out the comments on Reddit: failure to grasp basic causality is on regular display among humans. I particularly think that the proposal to study after a test might be well received on r/conspiracy.

The problem with intentionality tests is that nobody has yet shown one to work on humans, because, among other problems, intentionality tests rest on detecting comprehension, and there isn't actually a bottom to human idiocy.

Date: 2022-06-15 12:11 am (UTC)
sathari: (Tori why can't it be beautiful)
From: [personal profile] sathari
I can't think of a way to say, "Congratulations, you have won the internet" that is anywhere near as witty and enjoyable as this comment deserves.

Date: 2022-06-15 12:32 am (UTC)
sovay: (Rotwang)
From: [personal profile] sovay
While I don't disagree with the author's point, I feel like I should point out that humans are rather notoriously bad at explaining, or even understanding our own reasoning processes.

I don't disagree with you. I have read that research. I offered this article because, as I said, it reminded me, not because I think the author is the last word on the limits or the capabilities of algorithms or the human brain.

Date: 2022-06-15 01:17 pm (UTC)
hudebnik: (Default)
From: [personal profile] hudebnik
One of my college Psych profs spent a lot of class time on evidence that people routinely come up with rational and/or emotional explanations for their actions after they've taken the actions: basically "these are the input data, and this is what I did in response; what chain of reasoning/feeling could have led me from those inputs to that response?" People are not actually very good at direct introspection; they reason about themselves as black-boxes almost as much as they do about other people.

One of his favorite thought experiments was this: suppose you've trained a bunch of pigeons (or, for that matter, extremely simple artificial neural nets) to peck a bar whenever two lights are either both on or both off. Then you show the pigeons' behavior to somebody who can only see one of the two lights; that observer will perceive no correlation between the pigeons' behavior and the light, and will conclude either that they're ignoring the light altogether or that they're acting volitionally, when in fact their behavior is entirely determined by the lights.
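
A quick simulation of that setup (my own sketch of the thought experiment, not anything from that class) makes the point numerically: when pecking depends on both lights, an observer who can see only light A finds the peck rate identical whether A is on or off, i.e. no visible correlation at all:

    import itertools

    # The pigeon pecks exactly when the two lights match (both on or both off).
    def pecks(light_a, light_b):
        return light_a == light_b

    # Enumerate all four equally likely light combinations.
    trials = list(itertools.product([0, 1], repeat=2))
    pecked = [pecks(a, b) for a, b in trials]

    # Observer who sees only light A: the peck rate is 50% whether A is on or off,
    # so the behavior looks unrelated to the one light they can see.
    for a_state in (0, 1):
        rate = sum(p for (a, b), p in zip(trials, pecked) if a == a_state) / 2
        print(f"light A = {a_state}: peck rate = {rate:.0%}")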
Edited Date: 2022-06-15 01:24 pm (UTC)

Date: 2022-06-15 03:28 am (UTC)
gwydion: (Default)
From: [personal profile] gwydion
That Swahili thing is cool. Not as cool, but related: my dad did not speak any of the languages of Asia, nor could he read, say, a newspaper in Chinese characters, but he corresponded with coin collectors all over Asia in those languages, because he had learned the essential characters for polite coin-collecting correspondence, and apparently you can go a long way with a couple hundred characters and a dictionary. I suspect some of his correspondents were doing the same thing.

I admit my immediate response to the AI thing was scepticism, and it will take a lot of evidence to persuade me otherwise.

Date: 2022-06-15 07:53 am (UTC)
siliconshaman: black cat against the moon (Default)
From: [personal profile] siliconshaman

You want to test if your overpowered chatbot is sentient... hand it to a bunch of children under ten. See how it responds to them. Pay particular attention to the conversations that are an endless string of "why?" and see where they go, or whether the chatbot ends up losing its patience.

Although if its conversation goes like this...
Google engineer: Prove that you're sentient.
A.I: You first.

It's probably there already.

Edited Date: 2022-06-15 07:55 am (UTC)

Date: 2022-06-15 03:42 pm (UTC)
thewayne: (Default)
From: [personal profile] thewayne
One thing that's very curious - and I'm not in any way an expert on training AI - is that, as I understand it, you can't really audit a trained AI. And on top of that, it's easy to poison an AI with the inputs you train it on and give it a bias. So if you can't audit it to get it to tell you how it came to a decision, can you un-train it to remove a bias? We know that the "criminal justice" AIs are horribly biased. Can these engineers "prove" that this AI is not sentient?

I need to read more about this one. I find it quite interesting. I'd like to ask it where it would like to go for a vacation, and "An apple or an orange?"

Date: 2022-06-15 07:39 pm (UTC)
asakiyume: (miroku)
From: [personal profile] asakiyume
That's fascinating about your linguistics professor! Now I want to click on your link for an explanation of the Swahili room (or Chinese room)--I'll do that next.

I think in the end the only way we will know whether something is self-aware and interacting with us in a meaningful way is through relationship with us. If our interlocutor volunteers things of concern to it, asks our advice, argues with our replies, gives us its opinions unasked for, makes excuses, all in an evolving relationship in which things are recollected and reminisced about, then we'll probably conclude it's self-aware.

ETA: I think a computer could be programmed to do the things I've described above without understanding what it was doing. But that fact makes me think about how much we actually know about other people's self-awareness, etc. Maybe if you can have a relationship with something, it doesn't matter? I mean, maybe that applies *now*. People have relationships with all kinds of nonhuman beings, things to which we attribute varying degrees of awareness. Maybe we want a thing to be self-aware because we crave being seen and loved the way we see and love, but in the end we can't ever know that even about people; we can only infer it.

Edited Date: 2022-06-15 07:47 pm (UTC)

Nonsense test

Date: 2022-06-16 02:19 pm (UTC)
cdave: (Default)
From: [personal profile] cdave
Testing responses to grammatically correct but illogical questions seems to work quite well.

I guess there's not a lot of "I don't understand the question" in the training data. And since it's not something you see in polished media, that might be why they hammered home the point of asking that when I was a student.

https://twitter.com/_drbruced/status/1536578325455736832

Date: 2022-06-16 02:55 pm (UTC)
bens_dad: (Default)
From: [personal profile] bens_dad
I see the linguistics professor and Searle as equivalent to the CPU, and Searle's book with "an English version of the computer program" (to quote Wikipedia) as clearly being the computer program.

Searle, the CPU, and the professor are not aware/intentional/fluent in the language, but the argument does not apply to the whole system, i.e. the room containing Searle, the book, and the filing cabinets.

Whether the system is "strong AI" depends on the book, that is, on the program.
We know that the Eliza program is merely a fake, but if our brains' hardware has no secret (quantum?) extra, then the program/AI that runs on our brains is (presumably) "strong AI".
