ChatGPT is not...
May. 3rd, 2023 10:16 am
...a search engine, a database, an encyclopedia, a person, or otherwise useful in any way for the task of finding information.
Stop asking it to do what it was not made to do! And when it fails to do the thing, stop saying things like "I think it has the right book, it just can't give it to me for some reason" or "It admitted it was lying". No? Neither of those things is true!
It's a fancy, shiny Chinese room. Don't look too hard, though, because that fancy shiny paint cracks very easily.
no subject
Date: 2023-04-28 03:18 pm (UTC)
On another occasion, I asked it to translate phrases from English or French into Sindarin, the language of the Elves. I still need to double-check one, because the second looked good but was ultimately a fail. I had it do the translation again a few weeks later, and it came up with something totally different. But since Google Translate doesn't handle Sindarin, it's not all that easy to work with.
no subject
Date: 2023-04-28 03:24 pm (UTC)
It's also not going to do the things people want it to unless the fundamental way it is built changes, and as far as I know there haven't been a ton of conceptual advances in ML recently, just people willing to throw more compute at training.
no subject
Date: 2023-04-28 03:44 pm (UTC)
I've used it for recipes and found it more useful in some ways than Google, at least when I just want to know if I sub yeast 1:1 when doubling a recipe, or if I can use cornmeal in place of semolina. No big consequences if it's wrong, and I don't have patience for recipe blogs. I'm not sure if this is a pro or not, but Google doesn't work properly with my VPN, so I DO use ChatGPT as a question-answering engine. Just not a very reliable one.
I'd also really like to know WHY it gets some of the answers wrong. For instance, it had a fact about snakes wrong; when I asked it to check that fact, it corrected itself. But... I understand its information is tiered, and specific species info is probably in a lower tier which isn't as readily accessed. However, this wasn't a debated fact, or controversial, or anything. Just a factoid. And how did it correct itself without real-time access? And how is it responding to questions without accessing the full info? Is it just doing a surface search, and if the info isn't available in tier one, assuming a negative without looking in tier two? What's going on in that box?!
no subject
Date: 2023-04-28 03:53 pm (UTC)
But if he puts together an invalid combination of symbols, he doesn't get bupkis.
And that's how it works.
He also doesn't know anything about snakes, and neither does ChatGPT. It only puts together arbitrary symbols ("words") in ways that are probable.
no subject
Date: 2023-04-29 12:24 am (UTC)
Because inevitably the "problem" is something like "we have a ton of VERY COMPANY-SPECIFIC data in non-computer-parseable formats (think scanned PDF files) across our organization," and right now humans need to search for the matching documents, read them, and pull out the bits of data that they need to generate information. And the complaint is "this is hard work."
So we say - you don't really need AI to solve this problem? You need to provide these data points in a parseable electronic format, and tell us what they *mean* - i.e. "this number on this document goes in field X on the form" - and then we can TOTALLY automate that for you? No AI required!
But they say "oh, we cannot possibly figure out where this data is actually stored! Can't you just use AI?"
And I sigh deeply and give up.
Because, guess what? ChatGPT does not have ANY of these little squares in its bag - this data is proprietary drug formulations and pricing stuff, for drugs that are still in development - HOW exactly do you think ChatGPT or any other AI out there is going to know ANY of this???
"AI" is the new "Web services" - 15 years ago, all anyone wanted to know is "do you support web services?" - and nobody actually knew what they did or how they would benefit them - they just heard that "web services" was cool and they should have it. 10 years ago it was "virtual machines" - everyone wanted to know if we supported "virtual machines"
People are dumb.
no subject
Date: 2023-04-28 04:25 pm (UTC)
It reminds me of a game show which awarded points for correctly guessing which answers a panel of 100 ordinary people gave. Things like what flavor of ice cream they liked, where yes, "what is the most popular flavor" is answerable, but it's not going to help me decide what kind of ice cream to buy.
On the other hand, it didn't take ChatGPT to turn "you can't caramelize onions in ten minutes, here's how long it really takes" into a link at the end of yet another article claiming that ten minutes is enough time for that.
no subject
Date: 2023-04-28 09:03 pm (UTC)
AFAIK it's a very, very, very fancy Markov chain. Ultimately it's just choosing next words based on probabilities conditional on previous words, except instead of a simple "last two words" it's got multiple layers and billions of nodes. But it's still rolling dice. "Coral snakes are _" is most likely to be completed by "poisonous," but hey, it could pick some other completion like "pretty" or "not poisonous."
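To see the dice-rolling in miniature, here's a toy sketch in Python: a plain bigram chain trained on a made-up five-sentence corpus. The corpus and the helper name next_word are invented for illustration, and this is nothing like the real architecture (no layers, no attention, just a frequency table), but it shows the core move of sampling the next word in proportion to how often it followed the previous one.

    import random
    from collections import defaultdict

    # Made-up corpus; "poisonous" deliberately outnumbers the other completions.
    corpus = (
        "coral snakes are poisonous . coral snakes are poisonous . "
        "coral snakes are poisonous . coral snakes are pretty . "
        "coral snakes are not poisonous ."
    ).split()

    # Count how often each word follows each other word.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_word(prev):
        """Roll the dice: sample a successor of `prev` weighted by frequency."""
        successors = counts[prev]
        return random.choices(list(successors), weights=list(successors.values()))[0]

    print(next_word("are"))  # usually "poisonous", sometimes "pretty" or "not"

Most rolls land on "poisonous" (3 of the 5 chances here), but "pretty" and "not" each come up too: a likely completion, not a checked fact.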
no subject
Date: 2023-04-29 10:58 am (UTC)
Thank you, this was very helpful! (I used predictive text here, and I'm letting it stand but need to acknowledge it.)
no subject
Date: 2023-04-29 11:57 am (UTC)
By and large, objectively true statements get repeated more often than objectively and obviously false statements, so they're ranked higher and more likely to come out. But as we all know, objectively and obviously false statements can be repeated a lot too, if they're appealing enough or serve the interest of somebody powerful.
no subject
Date: 2023-04-28 05:13 pm (UTC)
It's a lying oracle. I keep wanting to write about that, since the only way I know of to beat a lying oracle is to constrain when it can lie.
no subject
Date: 2023-04-28 11:57 pm (UTC)
But weirdly, even if our text can be predicted, people simply can't.
no subject
Date: 2023-04-28 07:40 pm (UTC)
Well... true. It writes quite good haikus, though. Better than I could.
no subject
Date: 2023-04-28 08:22 pm (UTC)
I think the most useful analogy I've found so far is predictive text autocomplete - it strings together the most likely next word and so often fails in the same kinds of ways as autocarrot.
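That analogy suggests its own toy sketch: a keyboard-style autocomplete that always takes the single most likely next word. The two-sentence corpus and the helper name complete are again invented for illustration, not anything like the real system, but they show how greedily chaining "most likely next word" locks onto one phrase and never surfaces the alternative.

    from collections import defaultdict

    corpus = "i am on my way home . i am on the fence about it .".split()

    # Count successors, roughly as a phone keyboard might learn from your typing.
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def complete(start, max_words=10):
        """Greedy autocomplete: always append the most frequent next word."""
        words = [start]
        while words[-1] != "." and counts[words[-1]] and len(words) < max_words:
            successors = counts[words[-1]]
            words.append(max(successors, key=successors.get))
        return " ".join(words)

    print(complete("i"))  # "i am on my way home ." (ties go to the first-seen word)

Starting from "i", the greedy chain commits to "on my way home" and never offers "on the fence," much as autocarrot confidently finishes your sentence with the statistically popular but wrong phrase.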
no subject
Date: 2023-04-29 12:17 pm (UTC)
All those quotations and references, even the articles and books, that a ChatGPT-generated essay includes can be plausible fictions. They rarely actually exist.