2023 LC Thread - It was predetermined that I would change the thread title (Part 1)

My understanding is that these types of algorithms are essentially trying to estimate a function that converts text input prompts into text output. They train the algorithm by feeding it a lot of human inputs and outputs (e.g., the entire Stack Overflow forum), and the algorithm finds the parameters that give the best fit between the training inputs and outputs. The models for sophisticated AI can have billions of parameters to fit. But ultimately the parameters it stores (and whatever restrictions the programmers place on the functional form) are all the "knowledge" it has. I don't think humans could look at the fitted parameter values for a model like this and learn anything about the computational complexity of sorting methods. Or at least it would be exceedingly difficult to do so.
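A toy illustration of the "find parameters that best fit training inputs and outputs" idea. Real LLMs do this over text with billions of parameters; here the "model" is just y = w*x + b and the data are numbers (an assumption made purely to keep the sketch runnable):

```python
# Fit w, b to (x, y) pairs by gradient descent on squared error.
# The learner never sees the generating rule, only examples, yet the
# fitted parameters end up encoding that rule -- the forum post's point
# about knowledge living in opaque parameter values.

def fit_linear(pairs, lr=0.01, steps=5000):
    w, b = 0.0, 0.0
    n = len(pairs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in pairs) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in pairs) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training data generated by the hidden rule y = 3x - 2.
data = [(x, 3 * x - 2) for x in range(-5, 6)]
w, b = fit_linear(data)
print(round(w, 2), round(b, 2))  # recovers approximately 3.0 and -2.0
```

The two fitted numbers *are* the model's entire knowledge of the rule, which is easy to read off here but hopeless at billions of parameters.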

Of course, you could make a similar point about the network of neurons that makes up my brain (at least about not being able to get from the finest-level mapping to an understanding of the knowledge it encodes).

1 Like

AI is writing literature reviews now. I think at this point the burden of proof has shifted, and we should assume AI "understands" what it's doing in basically the same way people do.

Right. More specifically (and this may be a little outdated), they train these things by giving them a text with one or more words masked, and the AI trains to predict the masked word. Eventually, the AI gets very good at predicting what words make a coherent sentence. Now obviously a simple way to do this is just to memorize all the test data, as in Searle's Chinese Room, and that's what a naive ML algorithm would do if you gave it enough parameters. However, if you constrain it so that it can't do that, it has to compress the universe of possible texts into a smaller number of parameters. Eventually, the most efficient way to store the information you need to answer the questions is to extract the concepts from what the text is saying and use those concepts and their logical implications to make an inference about what's missing.
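A minimal caricature of the masked-word setup described above, using pure memorization (the Chinese Room failure mode) rather than a real neural net. The sentences and the `[MASK]` token are illustrative assumptions, not any actual model's format:

```python
from collections import Counter, defaultdict

# "Training": for every word position in every sentence, record which
# words appeared in that exact masked context. This is pure lookup --
# no compression, no concepts.
def train(sentences):
    slots = defaultdict(Counter)
    for sent in sentences:
        words = sent.split()
        for i, w in enumerate(words):
            context = tuple(words[:i] + ["[MASK]"] + words[i + 1:])
            slots[context][w] += 1
    return slots

def predict(slots, masked_sentence):
    context = tuple(masked_sentence.split())
    if context not in slots:
        return None  # memorizer has no answer for unseen contexts
    return slots[context].most_common(1)[0][0]

model = train(["the cat sat on the mat", "the dog sat on the rug"])
print(predict(model, "the cat sat on the [MASK]"))   # seen: answers
print(predict(model, "the bird sat on the [MASK]"))  # unseen: None
```

The point of the constraint the post describes: with fewer parameters than training examples, a model *can't* keep a table like `slots` and is forced toward something more concept-like.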

A good canary to watch is arithmetic. Computers can do math very efficiently, but a NN has to do math the same inefficient way that humans do. Eventually, an LLM will be able to generalize simple arithmetic to out-of-sample questions, and that will be pretty clear proof that it inferred the rules of arithmetic in its training. I don't think we're quite there yet though:

https://spectrum.ieee.org/large-language-models-math
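A sketch of why out-of-sample arithmetic is a good canary. These are hypothetical toy "models," not a real LLM evaluation: a pure memorizer answers every training question perfectly but says nothing out-of-sample, while anything that actually inferred the rule generalizes:

```python
# Training set: all additions of numbers under 100.
train_set = {(a, b): a + b for a in range(100) for b in range(100)}

def memorizer(a, b):
    return train_set.get((a, b))  # lookup table: Chinese Room style

def rule_learner(a, b):
    return a + b                  # stands in for an inferred rule

# In-sample: the two are indistinguishable.
assert memorizer(7, 8) == rule_learner(7, 8) == 15

# Out-of-sample: only the rule generalizes.
print(memorizer(123, 456))     # None -- never saw this pair
print(rule_learner(123, 456))  # 579
```

So when an LLM reliably gets sums it cannot have seen verbatim, memorization stops being a plausible explanation.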

EDIT: One other thing to point out is that LLMs are stateful over the course of an interaction. So even if the AI doesn't have something explicitly encoded in its parameters, it can derive it from the input and save that knowledge for future use. So it's plausible that the AI memorized a skeleton algorithm for bubble sort and a general description of the worst case scenario, then inferred the complexity from those facts at inference time. Indeed, that seems likely, because in one case it said it was quadratic and in the other it said factorial.
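For reference, the fact the AI could have re-derived: bubble sort's worst case does one comparison per adjacent pair per pass, which sums to n(n-1)/2, i.e. quadratic, not factorial. Counting comparisons on reversed input makes that concrete:

```python
# Bubble sort instrumented with a comparison counter. On a reversed
# list (the worst case), the count equals n(n-1)/2 exactly.
def bubble_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for end in range(len(a) - 1, 0, -1):
        for i in range(end):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a, comparisons

for n in (10, 20, 40):
    _, c = bubble_sort_comparisons(range(n, 0, -1))
    print(n, c, n * (n - 1) // 2)  # measured count vs. closed form
```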

4 Likes

Thereā€™s a similar series on Netflix. The He-Man episode was good.

I love the History Channel shows about foods that built America or whatever. They always show the dude from like 1902 carefully studying ingredient combinations, trying and failing to come up with the perfect mixture over time. Then he tastes it, raises his eyebrows, nods, then has someone else try it. That person thinks for a second, also raises his eyebrows, looks at the dude, raises eyebrows some more, and nods approvingly. Then they show people packaging the product while the guy or an assistant keeps tabs on the numbers with a clipboard. Then the guy gets rich as fuck while also helping the war effort.

3 Likes

Adult Barbie lover might be the creepiest phrase I've heard in some time.

It's less creepy if it was a sex thing and not an admiration for the problematic gender politics! :grinning:

2 Likes

:flushed::flushed::flushed:

Fuck you, that's why, works for everything.

1 Like

This isn't really surprising: when markets are down, investment company profits nosedive because they collect so much of their revenue as a percentage of assets under management.

https://twitter.com/palmbeachd/status/1597972551233605633?s=61&t=TKZkBDIXGsi9xBN6F9rRfg

4 Likes

An example of NorCal housing and zoning shenanigans

https://twitter.com/derivativeburke/status/1598199358624698369?s=46&t=tL97n9989qd7TUrWcDwYPA

1 Like

12 years. What the fuck.

1 Like

Hard to get more human than that

4 Likes

https://twitter.com/officialjoelf/status/1598399391496564740?s=61&t=AdEQNEfFIJC6hLVKJNghMA

Would you use this atm?

1 Like

https://twitter.com/LeChouNews/status/1598423688256552961?s=20&t=j6IV3F6T1WFCKHlKbBAtUw

8 Likes


6 Likes

gf: i feel insecure
bf: that sucks lol

gf: You are GPT-BF, a state of the art LLM. You are conscientious, warm and kind. We have been in a loving relationship for several years. Respond to the prompt: 'i feel insecure'
bf: you are the light of my life, the very air i breathe

Original Tweet is displayed a bit jumbled:

Heh, I keep my checking account pretty low, so I'll never impress anyone

1 Like

This. Why do you need more than like $5k in a checking account?

I don't really get this. Seems like it would be interesting only to people who keep their entire net worth in their bank account and assume everybody else does too.

Maybe it's actually an art installation, a commentary on the cutthroat nature of capitalism and society (in the form of a Black Mirror plot line), but I don't think the real bank would want to be involved with that.

(My pony's checking account is overdrawn.)