conuly: (Default)
[personal profile] conuly
a search engine, a database, an encyclopedia, a person, or otherwise useful in any way with the task of finding information.

Stop asking it to do what it was not made to do! And when it fails to do the thing, stop saying things like "I think it has the right book, it just can't give it to me for some reason" or "It admitted it was lying". No? Neither of those things is true!

It's a fancy, shiny Chinese room. Don't look too hard, though, because that fancy shiny paint cracks very easily.

Date: 2023-04-28 03:18 pm (UTC)
thewayne: (Default)
From: [personal profile] thewayne
I've been using ChatGPT 4 occasionally, and I find it useful in some ways. For example, I had a performance in Lord of the Rings Online and I asked it to provide me with a list of rock songs from the '70s and '80s that referenced stars or planets, and it gave me a pretty good list.

On another occasion, I asked it to translate phrases from English and French into Sindarin, the language of the Elves. I need to double-check one, because the second translation looked good but was ultimately a fail. When I had it do the translation again a few weeks later, it came up with something totally different. But since Google Translate doesn't handle Sindarin, it's not all that easy to verify.

Date: 2023-04-28 03:24 pm (UTC)
ioplokon: purple cloth (Default)
From: [personal profile] ioplokon
yeah, i call it clever hans (the counting horse), but i feel this is somewhat unfair to clever hans.

it's also like... it's not going to do the things people want it to unless the fundamental way it is built changes & afaik there haven't been a ton of conceptual advances in ml recently, just people willing to throw more compute at training.

Date: 2023-04-28 03:44 pm (UTC)
8hyenas: (Default)
From: [personal profile] 8hyenas
I use it to make my bullet point summaries at work into boring paragraphs. A task which I loathe. And to write sympathy/congratulations and other rote response things, which I am horrendous at.
I've used it for recipes and found it more useful in some ways than google. When I just want to know if I sub yeast 1:1 when doubling a recipe, or if I can use cornmeal for semolina. No big consequences if it's wrong and I don't have patience for recipe blogs. I'm not sure if this is a pro or not but google doesn't work properly with my VPN, so I DO use chatgpt as a question answering engine. Just not a very reliable one.

I'd also really like to know WHY it gets some of the answers wrong. For instance it had a fact about snakes wrong, when I asked it to check that fact it corrected itself. But... I understand its information is tiered, and specific species info is probably in a lower info tier which isn't as readily accessed. However this wasn't a debated fact, or controversial or anything. Just a factoid. And how did it correct itself without real time access? And how is it responding to questions without accessing the full info? Is it just doing a surface search and if the info isn't available in tier one assuming it's a negative without looking in tier two? What's going on in that box?!

Date: 2023-04-28 03:48 pm (UTC)
From: [personal profile] hashiveinu
Someone wrote a good description of how it works here: https://gallusrostromegalus.tumblr.com/post/712643937414840320/chinese-room-2

Date: 2023-04-28 04:25 pm (UTC)
redbird: closeup of me drinking tea, in a friend's kitchen (Default)
From: [personal profile] redbird
I think part of why it gets things wrong is that it tends to ignore some kinds of negations, and introductory phrases like "people once thought." So "people used to think you should treat burns with butter, which is a bad idea" may be stored as "you should treat burns with butter" and "only small children believe reindeer can fly" may produce "reindeer can fly."

It reminds me of a game show which awarded points for correctly guessing which answers a panel of 100 ordinary people gave. Things like what flavor of ice cream they liked, where yes "what is the most popular flavor" is answerable, but it's not going to help me decide what kind of ice cream to buy.

On the other hand, it didn't take ChatGPT to turn "you can't caramelize onions in ten minutes, here's how long it really takes" into a link at the end of yet another article claiming that ten minutes is enough time for that.

Date: 2023-04-28 04:33 pm (UTC)
calimac: (Default)
From: [personal profile] calimac
It gave me a straight answer to a question I was never able to get a useful response to from any humans, which was "WTF does the phrase 'only connect' mean?"

Date: 2023-04-28 05:13 pm (UTC)
redsixwing: A red knotwork emblem. (Default)
From: [personal profile] redsixwing
Hear hear.

It's a lying oracle. I keep wanting to write about that; the only way I know of to beat a lying oracle is to constrain when it can lie.

Date: 2023-04-28 05:19 pm (UTC)
movingfinger: (Default)
From: [personal profile] movingfinger
Clever Hans was the Sherlock Holmes of horses, going by his observational capacity.

Date: 2023-04-28 05:51 pm (UTC)
ioplokon: purple cloth (Default)
From: [personal profile] ioplokon
agreed. like 'this horse is really good at cold reading' is in some ways more impressive?

Date: 2023-04-28 07:02 pm (UTC)
silveradept: A kodama with a trombone. The trombone is playing music, even though it is held in a rest position (Default)
From: [personal profile] silveradept
It does not do finding accurate information. I feel like it might make for a useful backdoor analysis tool of what "everyone knows," supposedly, because whatever it outputs is likely to be the conventional wisdom, statistically, even and especially when the conventional wisdom is flat-out wrong.

Date: 2023-04-28 07:40 pm (UTC)
siliconshaman: black cat against the moon (Default)
From: [personal profile] siliconshaman

Well.. true. It writes quite good haikus though. Better than I could.

Date: 2023-04-28 08:22 pm (UTC)
cesy: "Cesy" - An old-fashioned quill and ink (Default)
From: [personal profile] cesy

I think the most useful analogy I've found so far is predictive text autocomplete - it strings together the most likely next word and so often fails in the same kinds of ways as autocarrot.

Date: 2023-04-28 09:03 pm (UTC)
mindstalk: (Default)
From: [personal profile] mindstalk
"I'd also really like to know WHY it gets some of the answers wrong"

AFAIK it's a very very very fancy Markov chain. Ultimately it's just choosing next words based on probabilities conditional on previous words, except instead of a simple "last two words" it's got multiple layers and billions of nodes. But it's still rolling dice. "Coral snakes are _" is most likely to be completed by 'poisonous' but hey, it could pick some other completion like "pretty" or "not poisonous".
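The "rolling dice on conditional probabilities" idea above can be sketched as a toy word-level Markov chain. This is an illustration of the analogy only, not how ChatGPT is actually built (real models use transformer networks over subword tokens); the corpus and function names here are made up for the example:

```python
import random
from collections import defaultdict

def train(corpus):
    """Count, for each word, every word that followed it in the corpus."""
    counts = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev].append(nxt)
    return counts

def complete(counts, word, length=4, seed=None):
    """Extend `word` by repeatedly rolling weighted dice over followers."""
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        # choice over the raw follower list = sampling weighted by frequency
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("coral snakes are venomous . coral snakes are pretty . "
          "coral snakes are venomous .")
model = train(corpus)
# 'venomous' follows 'are' twice and 'pretty' once, so 'venomous' is the
# likelier roll -- but 'pretty' still comes up about one time in three.
```

The point of the sketch is exactly the one above: the model never knows a fact about snakes, it only knows that one continuation is a more probable roll than another.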

Date: 2023-04-29 12:24 am (UTC)
spiffikins: (Default)
From: [personal profile] spiffikins
ANNNND this is why I *haaaatttteee* when customers ask us "do you support AI yet?" in our software, or say "we want to use AI to solve this problem"

Because inevitably the "problem" is something like "we have a ton of VERY COMPANY-SPECIFIC data in non-computer parseable formats (think - scanned pdf files) across our organization" and right now humans need to search for the matching documents, read them, and pull out the bits of data that they need to generate information. And the complaint is "this is hard work"

So we say - you don't really need AI to solve this problem? You need to provide these data points in a parseable electronic format, and tell us what they *mean* - i.e. "this number on this document goes in field X on the form" - and then we can TOTALLY automate that for you? No AI required!

But they say "oh, we cannot possibly figure out where this data is actually stored! Can't you just use AI?"

And I sigh deeply and give up.

Because, guess what? ChatGPT does not have ANY of these little squares in its bag - this data is proprietary drug formulations and pricing stuff - for drugs that are still in development - HOW exactly do you think ChatGPT or any other AI out there is going to know ANY of this???

"AI" is the new "Web services" - 15 years ago, all anyone wanted to know is "do you support web services?" - and nobody actually knew what they did or how they would benefit them - they just heard that "web services" was cool and they should have it. 10 years ago it was "virtual machines" - everyone wanted to know if we supported "virtual machines"

People are dumb.

Date: 2023-04-29 02:40 am (UTC)
shadowkat: (Default)
From: [personal profile] shadowkat
It sounds very headache inducing. Although this is kind of true of most technological advances. And I have far too many of them to figure out as it is - I will avoid this one for as long as feasibly possible.

Date: 2023-04-29 10:58 am (UTC)
8hyenas: (Default)
From: [personal profile] 8hyenas

Thank you, this was very helpful! (I used predictive text here, and I'm letting it stand but need to acknowledge it.)

Date: 2023-04-29 11:57 am (UTC)
hudebnik: (Default)
From: [personal profile] hudebnik
I think you're still treating it as reasoning about the world, based on information ("tiered" or otherwise) about the world. ChatGPT and its cousins at other companies (like my employer's Bard) have NO information directly about the world; they have a whole lot of words, sentences, and paragraphs that people have written over the centuries. Things that people have written frequently, such as "this is just the tip of the iceberg" or "we stand at a crossroads" or "2 + 2 = 4", are more likely to come out than things people have written rarely, like "my hovercraft is full of penguins" or "2 + 2 = 5". Obscenities, racial and sexual epithets, etc. fall into the former category, but the programmers have added some special-case rules so the public doesn't see them so often. I think they've also put a thumb on the scale (see what I did there?) in favor of both-sides-ism and disclaiming authoritative information, in hope that the public doesn't treat them as more factually authoritative than they are.

By and large, objectively true statements get repeated more often than objectively and obviously false statements, so they're ranked higher and more likely to come out. But as we all know, objectively and obviously false statements can be repeated a lot too, if they're appealing enough or serve the interest of somebody powerful.
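That "frequently repeated beats rarely written, regardless of truth" point can be shown with a tiny sketch. The statements below are assumed example data, not real training text; the sketch just demonstrates that ranking by repetition has no notion of truth, only of popularity:

```python
from collections import Counter

# A pretend corpus where one true statement is repeated often,
# one false statement is repeated moderately often, and another
# false statement appears only once.
corpus = [
    "2 + 2 = 4",
    "2 + 2 = 4",
    "2 + 2 = 4",
    "the earth is flat",
    "the earth is flat",
    "2 + 2 = 5",
]

# Rank statements purely by how often the corpus repeats them.
ranked = Counter(corpus).most_common()
# "2 + 2 = 4" wins on frequency -- but "the earth is flat" outranks
# "2 + 2 = 5" for exactly the reason described: repetition, not truth.
```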

Date: 2023-04-29 12:17 pm (UTC)
ancarett: Change the World - Jack Layton's Last Letter (Default)
From: [personal profile] ancarett
We've had several students use it at the university. The ones who aren't happy when we know this are the ones who think that ChatGPT functions like a search engine rather than a generator when it comes to providing facts and references.

All those quotations and references, even the articles and books, that a ChatGPT-generated essay includes can be plausible fictions. They rarely actually exist.

Date: 2023-04-29 01:23 pm (UTC)
8hyenas: (Default)
From: [personal profile] 8hyenas
Yeah, I got it after someone earlier mentioned a Markov chain. For some reason metaphors weren't doing it for me lol
